00:00:00.001 Started by upstream project "autotest-nightly" build number 3885
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3265
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.129 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.130 The recommended git tool is: git
00:00:00.130 using credential 00000000-0000-0000-0000-000000000002
00:00:00.132 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.184 Fetching changes from the remote Git repository
00:00:00.187 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.241 Using shallow fetch with depth 1
00:00:00.241 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.241 > git --version # timeout=10
00:00:00.278 > git --version # 'git version 2.39.2'
00:00:00.278 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.308 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.308 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.621 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.632 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.643 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:07.643 > git config core.sparsecheckout # timeout=10
00:00:07.653 > git read-tree -mu HEAD # timeout=10
00:00:07.669 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:07.686 Commit message: "inventory: add WCP3 to free inventory"
00:00:07.687 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:07.784 [Pipeline] Start of Pipeline
00:00:07.795 [Pipeline] library
00:00:07.796 Loading library shm_lib@master
00:00:07.796 Library shm_lib@master is cached. Copying from home.
00:00:07.810 [Pipeline] node
00:00:07.827 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.828 [Pipeline] {
00:00:07.839 [Pipeline] catchError
00:00:07.840 [Pipeline] {
00:00:07.852 [Pipeline] wrap
00:00:07.860 [Pipeline] {
00:00:07.866 [Pipeline] stage
00:00:07.868 [Pipeline] { (Prologue)
00:00:08.166 [Pipeline] sh
00:00:08.447 + logger -p user.info -t JENKINS-CI
00:00:08.463 [Pipeline] echo
00:00:08.464 Node: GP11
00:00:08.470 [Pipeline] sh
00:00:08.759 [Pipeline] setCustomBuildProperty
00:00:08.768 [Pipeline] echo
00:00:08.769 Cleanup processes
00:00:08.773 [Pipeline] sh
00:00:09.049 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.049 50864 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.059 [Pipeline] sh
00:00:09.336 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.336 ++ grep -v 'sudo pgrep'
00:00:09.336 ++ awk '{print $1}'
00:00:09.336 + sudo kill -9
00:00:09.336 + true
00:00:09.349 [Pipeline] cleanWs
00:00:09.357 [WS-CLEANUP] Deleting project workspace...
00:00:09.357 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.362 [WS-CLEANUP] done
00:00:09.365 [Pipeline] setCustomBuildProperty
00:00:09.373 [Pipeline] sh
00:00:09.648 + sudo git config --global --replace-all safe.directory '*'
00:00:09.737 [Pipeline] httpRequest
00:00:09.771 [Pipeline] echo
00:00:09.773 Sorcerer 10.211.164.101 is alive
00:00:09.783 [Pipeline] httpRequest
00:00:09.788 HttpMethod: GET
00:00:09.789 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:09.789 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:09.812 Response Code: HTTP/1.1 200 OK
00:00:09.812 Success: Status code 200 is in the accepted range: 200,404
00:00:09.813 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:14.563 [Pipeline] sh
00:00:14.849 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:14.869 [Pipeline] httpRequest
00:00:14.893 [Pipeline] echo
00:00:14.895 Sorcerer 10.211.164.101 is alive
00:00:14.905 [Pipeline] httpRequest
00:00:14.910 HttpMethod: GET
00:00:14.910 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:14.911 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:14.934 Response Code: HTTP/1.1 200 OK
00:00:14.935 Success: Status code 200 is in the accepted range: 200,404
00:00:14.936 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:22.667 [Pipeline] sh
00:01:22.952 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:25.530 [Pipeline] sh
00:01:25.807 + git -C spdk log --oneline -n5
00:01:25.807 719d03c6a sock/uring: only register net impl if supported
00:01:25.807 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:01:25.807 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:01:25.807 6c7c1f57e accel: add sequence outstanding stat
00:01:25.807 3bc8e6a26 accel: add utility to put task
00:01:25.818 [Pipeline] }
00:01:25.835 [Pipeline] // stage
00:01:25.844 [Pipeline] stage
00:01:25.846 [Pipeline] { (Prepare)
00:01:25.864 [Pipeline] writeFile
00:01:25.881 [Pipeline] sh
00:01:26.157 + logger -p user.info -t JENKINS-CI
00:01:26.170 [Pipeline] sh
00:01:26.478 + logger -p user.info -t JENKINS-CI
00:01:26.489 [Pipeline] sh
00:01:26.768 + cat autorun-spdk.conf
00:01:26.769 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.769 SPDK_TEST_NVMF=1
00:01:26.769 SPDK_TEST_NVME_CLI=1
00:01:26.769 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:26.769 SPDK_TEST_NVMF_NICS=e810
00:01:26.769 SPDK_RUN_ASAN=1
00:01:26.769 SPDK_RUN_UBSAN=1
00:01:26.769 NET_TYPE=phy
00:01:26.775 RUN_NIGHTLY=1
00:01:26.781 [Pipeline] readFile
00:01:26.807 [Pipeline] withEnv
00:01:26.809 [Pipeline] {
00:01:26.823 [Pipeline] sh
00:01:27.106 + set -ex
00:01:27.106 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:27.106 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:27.106 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.106 ++ SPDK_TEST_NVMF=1
00:01:27.106 ++ SPDK_TEST_NVME_CLI=1
00:01:27.106 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:27.106 ++ SPDK_TEST_NVMF_NICS=e810
00:01:27.106 ++ SPDK_RUN_ASAN=1
00:01:27.106 ++ SPDK_RUN_UBSAN=1
00:01:27.106 ++ NET_TYPE=phy
00:01:27.106 ++ RUN_NIGHTLY=1
00:01:27.106 + case $SPDK_TEST_NVMF_NICS in
00:01:27.106 + DRIVERS=ice
00:01:27.106 + [[ tcp == \r\d\m\a ]]
00:01:27.106 + [[ -n ice ]]
00:01:27.106 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:27.106 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:27.106 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:27.106 rmmod: ERROR: Module irdma is not currently loaded
00:01:27.106 rmmod: ERROR: Module i40iw is not currently loaded
00:01:27.106 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:27.106 + true
00:01:27.106 + for D in $DRIVERS
00:01:27.106 + sudo modprobe ice
00:01:27.106 + exit 0
00:01:27.116 [Pipeline] }
00:01:27.136 [Pipeline] // withEnv
00:01:27.142 [Pipeline] }
00:01:27.157 [Pipeline] // stage
00:01:27.167 [Pipeline] catchError
00:01:27.169 [Pipeline] {
00:01:27.185 [Pipeline] timeout
00:01:27.185 Timeout set to expire in 50 min
00:01:27.187 [Pipeline] {
00:01:27.204 [Pipeline] stage
00:01:27.205 [Pipeline] { (Tests)
00:01:27.221 [Pipeline] sh
00:01:27.501 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:27.502 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:27.502 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:27.502 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:27.502 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:27.502 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:27.502 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:27.502 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:27.502 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:27.502 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:27.502 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:27.502 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:27.502 + source /etc/os-release
00:01:27.502 ++ NAME='Fedora Linux'
00:01:27.502 ++ VERSION='38 (Cloud Edition)'
00:01:27.502 ++ ID=fedora
00:01:27.502 ++ VERSION_ID=38
00:01:27.502 ++ VERSION_CODENAME=
00:01:27.502 ++ PLATFORM_ID=platform:f38
00:01:27.502 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:27.502 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:27.502 ++ LOGO=fedora-logo-icon
00:01:27.502 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:27.502 ++ HOME_URL=https://fedoraproject.org/
00:01:27.502 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:27.502 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:27.502 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:27.502 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:27.502 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:27.502 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:27.502 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:27.502 ++ SUPPORT_END=2024-05-14
00:01:27.502 ++ VARIANT='Cloud Edition'
00:01:27.502 ++ VARIANT_ID=cloud
00:01:27.502 + uname -a
00:01:27.502 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:27.502 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:28.435 Hugepages
00:01:28.435 node hugesize free / total
00:01:28.693 node0 1048576kB 0 / 0
00:01:28.693 node0 2048kB 0 / 0
00:01:28.693 node1 1048576kB 0 / 0
00:01:28.693 node1 2048kB 0 / 0
00:01:28.693
00:01:28.693 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:28.693 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:28.693 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:28.693 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:28.693 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:28.693 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:28.693 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:28.693 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:28.693 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:28.693 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:28.693 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:28.693 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:28.693 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:28.693 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:28.693 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:28.693 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:28.693 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:28.693 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:28.693 + rm -f /tmp/spdk-ld-path
00:01:28.693 + source autorun-spdk.conf
00:01:28.693 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.693 ++ SPDK_TEST_NVMF=1
00:01:28.693 ++ SPDK_TEST_NVME_CLI=1
00:01:28.693 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:28.693 ++ SPDK_TEST_NVMF_NICS=e810
00:01:28.693 ++ SPDK_RUN_ASAN=1
00:01:28.693 ++ SPDK_RUN_UBSAN=1
00:01:28.693 ++ NET_TYPE=phy
00:01:28.693 ++ RUN_NIGHTLY=1
00:01:28.693 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:28.693 + [[ -n '' ]]
00:01:28.693 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.693 + for M in /var/spdk/build-*-manifest.txt
00:01:28.693 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:28.693 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:28.693 + for M in /var/spdk/build-*-manifest.txt
00:01:28.693 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:28.693 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:28.693 ++ uname
00:01:28.693 + [[ Linux == \L\i\n\u\x ]]
00:01:28.693 + sudo dmesg -T
00:01:28.693 + sudo dmesg --clear
00:01:28.693 + dmesg_pid=52157
00:01:28.693 + [[ Fedora Linux == FreeBSD ]]
00:01:28.693 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:28.693 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:28.693 + sudo dmesg -Tw
00:01:28.693 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:28.693 + [[ -x /usr/src/fio-static/fio ]]
00:01:28.693 + export FIO_BIN=/usr/src/fio-static/fio
00:01:28.693 + FIO_BIN=/usr/src/fio-static/fio
00:01:28.693 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:28.693 + [[ !
-v VFIO_QEMU_BIN ]] 00:01:28.693 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:28.693 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.693 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.693 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:28.693 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.693 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.693 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.693 Test configuration: 00:01:28.693 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.693 SPDK_TEST_NVMF=1 00:01:28.693 SPDK_TEST_NVME_CLI=1 00:01:28.693 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.693 SPDK_TEST_NVMF_NICS=e810 00:01:28.693 SPDK_RUN_ASAN=1 00:01:28.693 SPDK_RUN_UBSAN=1 00:01:28.693 NET_TYPE=phy 00:01:28.693 RUN_NIGHTLY=1 13:13:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:28.693 13:13:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:28.693 13:13:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:28.693 13:13:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:28.693 13:13:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.693 13:13:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.693 13:13:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.693 13:13:03 -- paths/export.sh@5 -- $ export PATH 00:01:28.693 13:13:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.693 13:13:03 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:28.693 13:13:03 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:28.693 13:13:03 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720869183.XXXXXX 00:01:28.952 13:13:03 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720869183.mcKXsi 00:01:28.952 13:13:03 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:28.952 13:13:03 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:28.952 13:13:03 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:28.952 13:13:03 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:28.952 13:13:03 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:28.952 13:13:03 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:28.952 13:13:03 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:28.952 13:13:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.952 13:13:03 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:28.952 13:13:03 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:28.952 13:13:03 -- pm/common@17 -- $ local monitor 00:01:28.952 13:13:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.952 13:13:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.952 13:13:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.952 13:13:03 -- pm/common@21 -- $ date +%s 00:01:28.952 13:13:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.952 13:13:03 -- pm/common@21 -- $ date +%s 00:01:28.952 13:13:03 -- pm/common@25 -- $ sleep 1 00:01:28.952 13:13:03 -- pm/common@21 -- $ date +%s 00:01:28.952 13:13:03 -- pm/common@21 -- $ date +%s 00:01:28.952 13:13:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720869183 00:01:28.952 13:13:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720869183 00:01:28.952 13:13:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720869183 00:01:28.952 13:13:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720869183 00:01:28.952 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720869183_collect-vmstat.pm.log 00:01:28.952 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720869183_collect-cpu-load.pm.log 00:01:28.952 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720869183_collect-cpu-temp.pm.log 00:01:28.952 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720869183_collect-bmc-pm.bmc.pm.log 00:01:29.886 13:13:04 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:29.886 13:13:04 -- 
spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:29.886 13:13:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:29.886 13:13:04 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:29.886 13:13:04 -- spdk/autobuild.sh@16 -- $ date -u 00:01:29.886 Sat Jul 13 11:13:04 AM UTC 2024 00:01:29.886 13:13:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:29.886 v24.09-pre-202-g719d03c6a 00:01:29.886 13:13:04 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:29.886 13:13:04 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:29.886 13:13:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:29.886 13:13:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:29.886 13:13:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.886 ************************************ 00:01:29.886 START TEST asan 00:01:29.886 ************************************ 00:01:29.886 13:13:04 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:29.886 using asan 00:01:29.886 00:01:29.886 real 0m0.000s 00:01:29.886 user 0m0.000s 00:01:29.886 sys 0m0.000s 00:01:29.886 13:13:04 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:29.886 13:13:04 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.886 ************************************ 00:01:29.886 END TEST asan 00:01:29.886 ************************************ 00:01:29.886 13:13:04 -- common/autotest_common.sh@1142 -- $ return 0 00:01:29.886 13:13:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:29.886 13:13:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:29.886 13:13:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:29.886 13:13:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:29.886 13:13:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.886 ************************************ 00:01:29.886 START TEST ubsan 00:01:29.886 ************************************ 00:01:29.886 13:13:04 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:29.886 using ubsan 00:01:29.886 00:01:29.886 real 0m0.000s 00:01:29.886 user 0m0.000s 00:01:29.886 sys 0m0.000s 00:01:29.886 13:13:04 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:29.886 13:13:04 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.886 ************************************ 00:01:29.886 END TEST ubsan 00:01:29.886 ************************************ 00:01:29.886 13:13:04 -- common/autotest_common.sh@1142 -- $ return 0 00:01:29.886 13:13:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:29.886 13:13:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:29.886 13:13:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:29.886 13:13:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:29.886 13:13:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:29.886 13:13:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:29.886 13:13:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:29.886 13:13:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:29.886 13:13:04 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:29.886 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:29.886 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:30.451 Using 'verbs' RDMA provider 00:01:40.992 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:51.031 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:51.031 Creating mk/config.mk...done. 00:01:51.031 Creating mk/cc.flags.mk...done. 00:01:51.031 Type 'make' to build. 00:01:51.031 13:13:24 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:51.031 13:13:24 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:51.031 13:13:24 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:51.031 13:13:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.031 ************************************ 00:01:51.031 START TEST make 00:01:51.031 ************************************ 00:01:51.031 13:13:24 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:51.031 make[1]: Nothing to be done for 'all'. 00:01:59.161 The Meson build system 00:01:59.161 Version: 1.3.1 00:01:59.161 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:59.161 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:59.161 Build type: native build 00:01:59.161 Program cat found: YES (/usr/bin/cat) 00:01:59.161 Project name: DPDK 00:01:59.161 Project version: 24.03.0 00:01:59.161 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:59.161 C linker for the host machine: cc ld.bfd 2.39-16 00:01:59.161 Host machine cpu family: x86_64 00:01:59.161 Host machine cpu: x86_64 00:01:59.161 Message: ## Building in Developer Mode ## 00:01:59.161 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.161 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:59.161 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.161 Program python3 found: YES (/usr/bin/python3) 00:01:59.161 Program cat found: YES (/usr/bin/cat) 00:01:59.161 Compiler for C supports arguments -march=native: YES 00:01:59.161 Checking for size of "void *" : 8 00:01:59.161 Checking for size of "void *" : 8 (cached) 00:01:59.161 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:59.161 Library m found: YES 00:01:59.161 Library numa found: YES 00:01:59.161 Has header "numaif.h" : YES 00:01:59.161 Library fdt found: NO 00:01:59.161 Library execinfo found: NO 00:01:59.161 Has header "execinfo.h" : YES 00:01:59.161 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:59.161 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.161 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.161 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.161 Run-time dependency openssl found: YES 3.0.9 00:01:59.161 Run-time dependency libpcap found: YES 1.10.4 00:01:59.161 Has header "pcap.h" with dependency libpcap: YES 00:01:59.161 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.161 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.161 Compiler for C supports arguments -Wformat: YES 00:01:59.161 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:59.161 Compiler for C supports arguments -Wformat-security: NO 00:01:59.161 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.161 Compiler for C supports 
arguments -Wmissing-prototypes: YES 00:01:59.161 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.161 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.161 Compiler for C supports arguments -Wpointer-arith: YES 00:01:59.161 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.161 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.161 Compiler for C supports arguments -Wundef: YES 00:01:59.161 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.161 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.161 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:59.161 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.161 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.161 Program objdump found: YES (/usr/bin/objdump) 00:01:59.161 Compiler for C supports arguments -mavx512f: YES 00:01:59.161 Checking if "AVX512 checking" compiles: YES 00:01:59.161 Fetching value of define "__SSE4_2__" : 1 00:01:59.161 Fetching value of define "__AES__" : 1 00:01:59.161 Fetching value of define "__AVX__" : 1 00:01:59.161 Fetching value of define "__AVX2__" : (undefined) 00:01:59.161 Fetching value of define "__AVX512BW__" : (undefined) 00:01:59.161 Fetching value of define "__AVX512CD__" : (undefined) 00:01:59.161 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:59.161 Fetching value of define "__AVX512F__" : (undefined) 00:01:59.161 Fetching value of define "__AVX512VL__" : (undefined) 00:01:59.161 Fetching value of define "__PCLMUL__" : 1 00:01:59.161 Fetching value of define "__RDRND__" : 1 00:01:59.161 Fetching value of define "__RDSEED__" : (undefined) 00:01:59.161 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.161 Fetching value of define "__znver1__" : (undefined) 00:01:59.161 Fetching value of define "__znver2__" : (undefined) 00:01:59.161 Fetching value of define "__znver3__" : (undefined) 00:01:59.161 Fetching value of define "__znver4__" : (undefined) 00:01:59.161 Library asan found: YES 00:01:59.161 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.161 Message: lib/log: Defining dependency "log" 00:01:59.161 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.161 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.161 Library rt found: YES 00:01:59.161 Checking for function "getentropy" : NO 00:01:59.161 Message: lib/eal: Defining dependency "eal" 00:01:59.161 Message: lib/ring: Defining dependency "ring" 00:01:59.161 Message: lib/rcu: Defining dependency "rcu" 00:01:59.161 Message: lib/mempool: Defining dependency "mempool" 00:01:59.161 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.161 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:59.161 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.161 Compiler for C supports arguments -mpclmul: YES 00:01:59.161 Compiler for C supports arguments -maes: YES 00:01:59.161 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.161 Compiler for C supports arguments -mavx512bw: YES 00:01:59.161 Compiler for C supports arguments -mavx512dq: YES 00:01:59.161 Compiler for C supports arguments -mavx512vl: YES 00:01:59.161 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.161 Compiler for C supports arguments -mavx2: YES 00:01:59.161 Compiler for C supports arguments -mavx: YES 00:01:59.161 Message: lib/net: Defining dependency "net" 00:01:59.161 Message: lib/meter: Defining 
dependency "meter" 00:01:59.161 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.161 Message: lib/pci: Defining dependency "pci" 00:01:59.161 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.161 Message: lib/hash: Defining dependency "hash" 00:01:59.161 Message: lib/timer: Defining dependency "timer" 00:01:59.161 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.161 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.161 Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.161 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.161 Message: lib/power: Defining dependency "power" 00:01:59.161 Message: lib/reorder: Defining dependency "reorder" 00:01:59.161 Message: lib/security: Defining dependency "security" 00:01:59.161 Has header "linux/userfaultfd.h" : YES 00:01:59.161 Has header "linux/vduse.h" : YES 00:01:59.161 Message: lib/vhost: Defining dependency "vhost" 00:01:59.161 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:59.161 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:59.161 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:59.161 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:59.161 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:59.161 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:59.161 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:59.161 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:59.161 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:59.161 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:59.161 Program doxygen found: YES (/usr/bin/doxygen) 00:01:59.161 Configuring doxy-api-html.conf using configuration 00:01:59.161 Configuring doxy-api-man.conf using configuration 00:01:59.161 Program mandb found: YES (/usr/bin/mandb) 00:01:59.161 Program sphinx-build found: NO 00:01:59.161 Configuring rte_build_config.h using configuration 00:01:59.161 Message: 00:01:59.161 ================= 00:01:59.161 Applications Enabled 00:01:59.161 ================= 00:01:59.161 00:01:59.161 apps: 00:01:59.161 00:01:59.161 00:01:59.161 Message: 00:01:59.161 ================= 00:01:59.161 Libraries Enabled 00:01:59.161 ================= 00:01:59.161 00:01:59.161 libs: 00:01:59.161 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:59.161 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:59.161 cryptodev, dmadev, power, reorder, security, vhost, 00:01:59.161 00:01:59.161 Message: 00:01:59.161 =============== 00:01:59.161 Drivers Enabled 00:01:59.161 =============== 00:01:59.161 00:01:59.161 common: 00:01:59.161 00:01:59.161 bus: 00:01:59.161 pci, vdev, 00:01:59.161 mempool: 00:01:59.161 ring, 00:01:59.161 dma: 00:01:59.161 00:01:59.161 net: 00:01:59.161 00:01:59.161 crypto: 00:01:59.161 00:01:59.161 compress: 00:01:59.161 00:01:59.161 vdpa: 00:01:59.161 00:01:59.161 00:01:59.161 Message: 00:01:59.161 ================= 00:01:59.161 Content Skipped 00:01:59.161 ================= 00:01:59.161 00:01:59.161 apps: 00:01:59.161 dumpcap: explicitly disabled via build config 00:01:59.161 graph: explicitly disabled via build config 00:01:59.161 pdump: explicitly disabled via build config 00:01:59.161 proc-info: explicitly disabled via build config 00:01:59.161 test-acl: explicitly disabled via build config 00:01:59.161 test-bbdev: explicitly 
disabled via build config 00:01:59.161 test-cmdline: explicitly disabled via build config 00:01:59.161 test-compress-perf: explicitly disabled via build config 00:01:59.161 test-crypto-perf: explicitly disabled via build config 00:01:59.161 test-dma-perf: explicitly disabled via build config 00:01:59.161 test-eventdev: explicitly disabled via build config 00:01:59.161 test-fib: explicitly disabled via build config 00:01:59.162 test-flow-perf: explicitly disabled via build config 00:01:59.162 test-gpudev: explicitly disabled via build config 00:01:59.162 test-mldev: explicitly disabled via build config 00:01:59.162 test-pipeline: explicitly disabled via build config 00:01:59.162 test-pmd: explicitly disabled via build config 00:01:59.162 test-regex: explicitly disabled via build config 00:01:59.162 test-sad: explicitly disabled via build config 00:01:59.162 test-security-perf: explicitly disabled via build config 00:01:59.162 00:01:59.162 libs: 00:01:59.162 argparse: explicitly disabled via build config 00:01:59.162 metrics: explicitly disabled via build config 00:01:59.162 acl: explicitly disabled via build config 00:01:59.162 bbdev: explicitly disabled via build config 00:01:59.162 bitratestats: explicitly disabled via build config 00:01:59.162 bpf: explicitly disabled via build config 00:01:59.162 cfgfile: explicitly disabled via build config 00:01:59.162 distributor: explicitly disabled via build config 00:01:59.162 efd: explicitly disabled via build config 00:01:59.162 eventdev: explicitly disabled via build config 00:01:59.162 dispatcher: explicitly disabled via build config 00:01:59.162 gpudev: explicitly disabled via build config 00:01:59.162 gro: explicitly disabled via build config 00:01:59.162 gso: explicitly disabled via build config 00:01:59.162 ip_frag: explicitly disabled via build config 00:01:59.162 jobstats: explicitly disabled via build config 00:01:59.162 latencystats: explicitly disabled via build config 00:01:59.162 lpm: explicitly disabled via build config 00:01:59.162 member: explicitly disabled via build config 00:01:59.162 pcapng: explicitly disabled via build config 00:01:59.162 rawdev: explicitly disabled via build config 00:01:59.162 regexdev: explicitly disabled via build config 00:01:59.162 mldev: explicitly disabled via build config 00:01:59.162 rib: explicitly disabled via build config 00:01:59.162 sched: explicitly disabled via build config 00:01:59.162 stack: explicitly disabled via build config 00:01:59.162 ipsec: explicitly disabled via build config 00:01:59.162 pdcp: explicitly disabled via build config 00:01:59.162 fib: explicitly disabled via build config 00:01:59.162 port: explicitly disabled via build config 00:01:59.162 pdump: explicitly disabled via build config 00:01:59.162 table: explicitly disabled via build config 00:01:59.162 pipeline: explicitly disabled via build config 00:01:59.162 graph: explicitly disabled via build config 00:01:59.162 node: explicitly disabled via build config 00:01:59.162 00:01:59.162 drivers: 00:01:59.162 common/cpt: not in enabled drivers build config 00:01:59.162 common/dpaax: not in enabled drivers build config 00:01:59.162 common/iavf: not in enabled drivers build config 00:01:59.162 common/idpf: not in enabled drivers build config 00:01:59.162 common/ionic: not in enabled drivers build config 00:01:59.162 common/mvep: not in enabled drivers build config 00:01:59.162 common/octeontx: not in enabled drivers build config 00:01:59.162 bus/auxiliary: not in enabled drivers build config 00:01:59.162 bus/cdx: not in 
enabled drivers build config 00:01:59.162 bus/dpaa: not in enabled drivers build config 00:01:59.162 bus/fslmc: not in enabled drivers build config 00:01:59.162 bus/ifpga: not in enabled drivers build config 00:01:59.162 bus/platform: not in enabled drivers build config 00:01:59.162 bus/uacce: not in enabled drivers build config 00:01:59.162 bus/vmbus: not in enabled drivers build config 00:01:59.162 common/cnxk: not in enabled drivers build config 00:01:59.162 common/mlx5: not in enabled drivers build config 00:01:59.162 common/nfp: not in enabled drivers build config 00:01:59.162 common/nitrox: not in enabled drivers build config 00:01:59.162 common/qat: not in enabled drivers build config 00:01:59.162 common/sfc_efx: not in enabled drivers build config 00:01:59.162 mempool/bucket: not in enabled drivers build config 00:01:59.162 mempool/cnxk: not in enabled drivers build config 00:01:59.162 mempool/dpaa: not in enabled drivers build config 00:01:59.162 mempool/dpaa2: not in enabled drivers build config 00:01:59.162 mempool/octeontx: not in enabled drivers build config 00:01:59.162 mempool/stack: not in enabled drivers build config 00:01:59.162 dma/cnxk: not in enabled drivers build config 00:01:59.162 dma/dpaa: not in enabled drivers build config 00:01:59.162 dma/dpaa2: not in enabled drivers build config 00:01:59.162 dma/hisilicon: not in enabled drivers build config 00:01:59.162 dma/idxd: not in enabled drivers build config 00:01:59.162 dma/ioat: not in enabled drivers build config 00:01:59.162 dma/skeleton: not in enabled drivers build config 00:01:59.162 net/af_packet: not in enabled drivers build config 00:01:59.162 net/af_xdp: not in enabled drivers build config 00:01:59.162 net/ark: not in enabled drivers build config 00:01:59.162 net/atlantic: not in enabled drivers build config 00:01:59.162 net/avp: not in enabled drivers build config 00:01:59.162 net/axgbe: not in enabled drivers build config 00:01:59.162 net/bnx2x: not in enabled drivers build config 00:01:59.162 net/bnxt: not in enabled drivers build config 00:01:59.162 net/bonding: not in enabled drivers build config 00:01:59.162 net/cnxk: not in enabled drivers build config 00:01:59.162 net/cpfl: not in enabled drivers build config 00:01:59.162 net/cxgbe: not in enabled drivers build config 00:01:59.162 net/dpaa: not in enabled drivers build config 00:01:59.162 net/dpaa2: not in enabled drivers build config 00:01:59.162 net/e1000: not in enabled drivers build config 00:01:59.162 net/ena: not in enabled drivers build config 00:01:59.162 net/enetc: not in enabled drivers build config 00:01:59.162 net/enetfec: not in enabled drivers build config 00:01:59.162 net/enic: not in enabled drivers build config 00:01:59.162 net/failsafe: not in enabled drivers build config 00:01:59.162 net/fm10k: not in enabled drivers build config 00:01:59.162 net/gve: not in enabled drivers build config 00:01:59.162 net/hinic: not in enabled drivers build config 00:01:59.162 net/hns3: not in enabled drivers build config 00:01:59.162 net/i40e: not in enabled drivers build config 00:01:59.162 net/iavf: not in enabled drivers build config 00:01:59.162 net/ice: not in enabled drivers build config 00:01:59.162 net/idpf: not in enabled drivers build config 00:01:59.162 net/igc: not in enabled drivers build config 00:01:59.162 net/ionic: not in enabled drivers build config 00:01:59.162 net/ipn3ke: not in enabled drivers build config 00:01:59.162 net/ixgbe: not in enabled drivers build config 00:01:59.162 net/mana: not in enabled drivers build config 
00:01:59.162 net/memif: not in enabled drivers build config 00:01:59.162 net/mlx4: not in enabled drivers build config 00:01:59.162 net/mlx5: not in enabled drivers build config 00:01:59.162 net/mvneta: not in enabled drivers build config 00:01:59.162 net/mvpp2: not in enabled drivers build config 00:01:59.162 net/netvsc: not in enabled drivers build config 00:01:59.162 net/nfb: not in enabled drivers build config 00:01:59.162 net/nfp: not in enabled drivers build config 00:01:59.162 net/ngbe: not in enabled drivers build config 00:01:59.162 net/null: not in enabled drivers build config 00:01:59.162 net/octeontx: not in enabled drivers build config 00:01:59.162 net/octeon_ep: not in enabled drivers build config 00:01:59.162 net/pcap: not in enabled drivers build config 00:01:59.162 net/pfe: not in enabled drivers build config 00:01:59.162 net/qede: not in enabled drivers build config 00:01:59.162 net/ring: not in enabled drivers build config 00:01:59.162 net/sfc: not in enabled drivers build config 00:01:59.162 net/softnic: not in enabled drivers build config 00:01:59.162 net/tap: not in enabled drivers build config 00:01:59.162 net/thunderx: not in enabled drivers build config 00:01:59.162 net/txgbe: not in enabled drivers build config 00:01:59.162 net/vdev_netvsc: not in enabled drivers build config 00:01:59.162 net/vhost: not in enabled drivers build config 00:01:59.162 net/virtio: not in enabled drivers build config 00:01:59.162 net/vmxnet3: not in enabled drivers build config 00:01:59.162 raw/*: missing internal dependency, "rawdev" 00:01:59.162 crypto/armv8: not in enabled drivers build config 00:01:59.162 crypto/bcmfs: not in enabled drivers build config 00:01:59.162 crypto/caam_jr: not in enabled drivers build config 00:01:59.162 crypto/ccp: not in enabled drivers build config 00:01:59.162 crypto/cnxk: not in enabled drivers build config 00:01:59.162 crypto/dpaa_sec: not in enabled drivers build config 00:01:59.162 crypto/dpaa2_sec: not in enabled drivers build config 00:01:59.162 crypto/ipsec_mb: not in enabled drivers build config 00:01:59.162 crypto/mlx5: not in enabled drivers build config 00:01:59.162 crypto/mvsam: not in enabled drivers build config 00:01:59.162 crypto/nitrox: not in enabled drivers build config 00:01:59.162 crypto/null: not in enabled drivers build config 00:01:59.162 crypto/octeontx: not in enabled drivers build config 00:01:59.162 crypto/openssl: not in enabled drivers build config 00:01:59.162 crypto/scheduler: not in enabled drivers build config 00:01:59.162 crypto/uadk: not in enabled drivers build config 00:01:59.162 crypto/virtio: not in enabled drivers build config 00:01:59.162 compress/isal: not in enabled drivers build config 00:01:59.162 compress/mlx5: not in enabled drivers build config 00:01:59.162 compress/nitrox: not in enabled drivers build config 00:01:59.162 compress/octeontx: not in enabled drivers build config 00:01:59.162 compress/zlib: not in enabled drivers build config 00:01:59.162 regex/*: missing internal dependency, "regexdev" 00:01:59.162 ml/*: missing internal dependency, "mldev" 00:01:59.162 vdpa/ifc: not in enabled drivers build config 00:01:59.162 vdpa/mlx5: not in enabled drivers build config 00:01:59.162 vdpa/nfp: not in enabled drivers build config 00:01:59.162 vdpa/sfc: not in enabled drivers build config 00:01:59.162 event/*: missing internal dependency, "eventdev" 00:01:59.162 baseband/*: missing internal dependency, "bbdev" 00:01:59.162 gpu/*: missing internal dependency, "gpudev" 00:01:59.162 00:01:59.162 00:01:59.162 
Build targets in project: 85 00:01:59.162 00:01:59.162 DPDK 24.03.0 00:01:59.162 00:01:59.162 User defined options 00:01:59.162 buildtype : debug 00:01:59.162 default_library : shared 00:01:59.162 libdir : lib 00:01:59.162 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:59.162 b_sanitize : address 00:01:59.162 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:59.162 c_link_args : 00:01:59.162 cpu_instruction_set: native 00:01:59.162 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:59.162 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:59.162 enable_docs : false 00:01:59.162 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:59.162 enable_kmods : false 00:01:59.162 max_lcores : 128 00:01:59.162 tests : false 00:01:59.162 00:01:59.162 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.733 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:59.733 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:59.733 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:59.733 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:59.733 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:59.733 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:59.733 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:59.733 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:59.733 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:59.733 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:59.733 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:59.733 [11/268] Linking static target lib/librte_kvargs.a 00:01:59.733 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:59.733 [13/268] Linking static target lib/librte_log.a 00:01:59.733 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:59.733 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:59.995 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:00.571 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.571 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.571 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:00.571 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:00.571 [21/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.571 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.571 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:00.571 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:00.571 
[25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:00.571 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.571 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.571 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:00.571 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:00.571 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:00.571 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:00.571 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:00.571 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:00.571 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:00.571 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:00.571 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:00.571 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:00.571 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:00.571 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:00.571 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:00.571 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:00.571 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:00.571 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.571 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.571 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:00.571 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:00.571 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:00.833 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:00.833 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.833 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:00.833 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:00.833 [52/268] Linking static target lib/librte_telemetry.a 00:02:00.833 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:00.833 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:00.833 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:00.833 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:00.833 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:00.833 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:00.833 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:00.833 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:00.833 [61/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.093 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.093 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:01.093 [64/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:01.093 [65/268] Linking target lib/librte_log.so.24.1 00:02:01.093 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:01.352 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:01.352 [68/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:01.623 [69/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:01.623 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:01.623 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:01.623 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:01.623 [73/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:01.623 [74/268] Linking static target lib/librte_pci.a 00:02:01.623 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:01.623 [76/268] Linking target lib/librte_kvargs.so.24.1 00:02:01.623 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:01.623 [78/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:01.623 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:01.623 [80/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:01.623 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:01.623 [82/268] Linking static target lib/librte_ring.a 00:02:01.623 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:01.623 [84/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:01.623 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:01.623 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:01.623 [87/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:01.623 [88/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:01.623 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:01.623 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:01.623 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:01.623 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:01.881 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:01.882 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:01.882 [95/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.882 [96/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:01.882 [97/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:01.882 [98/268] Linking static target lib/librte_meter.a 00:02:01.882 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:01.882 [100/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:01.882 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:01.882 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:01.882 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:01.882 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:01.882 [105/268] Linking target lib/librte_telemetry.so.24.1 00:02:01.882 [106/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:01.882 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:01.882 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:01.882 [109/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:01.882 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:01.882 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:01.882 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:01.882 [113/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:02.147 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:02.147 [115/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:02.147 [116/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:02.147 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:02.147 [118/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:02.147 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.147 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:02.147 [121/268] Linking static target lib/librte_mempool.a 00:02:02.147 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:02.147 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:02.147 [124/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:02.147 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:02.147 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:02.147 [127/268] Linking static target lib/librte_rcu.a 00:02:02.147 [128/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:02.147 [129/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.406 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:02.406 [131/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.406 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:02.406 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:02.406 [134/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:02.667 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:02.667 [136/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:02.667 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:02.667 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:02.667 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:02.667 [140/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.667 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:02.667 [142/268] Linking static target lib/librte_cmdline.a 00:02:02.667 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:02.667 [144/268] Linking static target lib/librte_eal.a 00:02:02.927 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:02.927 [146/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:02.927 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:02.927 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:02.927 [149/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:02.927 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:02.927 [151/268] Linking static target lib/librte_timer.a 00:02:02.927 [152/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.927 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:02.927 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:02.927 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:02.927 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:02.927 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:03.188 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:03.188 [159/268] Linking static target lib/librte_dmadev.a 00:02:03.188 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:03.188 [161/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.188 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:03.188 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:03.188 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:03.446 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.446 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:03.446 [167/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:03.446 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:03.446 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:03.446 [170/268] Linking static target lib/librte_net.a 00:02:03.446 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:03.446 [172/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:03.446 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:03.706 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.706 [175/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.706 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:03.706 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:03.706 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:03.706 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:03.706 [180/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.706 [181/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:03.706 [182/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:03.706 [183/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:03.706 [184/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:03.706 [185/268] Compiling 
C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:03.706 [186/268] Linking static target lib/librte_power.a 00:02:03.706 [187/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:03.964 [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:03.964 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:03.964 [190/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:03.964 [191/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:03.964 [192/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:03.964 [193/268] Linking static target drivers/librte_bus_vdev.a 00:02:03.964 [194/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:03.964 [195/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:03.964 [196/268] Linking static target drivers/librte_bus_pci.a 00:02:03.965 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:03.965 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:03.965 [199/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:03.965 [200/268] Linking static target lib/librte_hash.a 00:02:03.965 [201/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:03.965 [202/268] Linking static target lib/librte_compressdev.a 00:02:04.223 [203/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.223 [204/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:04.223 [205/268] Linking static target lib/librte_reorder.a 00:02:04.223 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:04.223 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.223 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.223 [209/268] Linking static target drivers/librte_mempool_ring.a 00:02:04.223 [210/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.223 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:04.481 [212/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.481 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.481 [214/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.481 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.047 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:05.047 [217/268] Linking static target lib/librte_security.a 00:02:05.304 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.304 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:06.238 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:06.238 [221/268] Linking static target lib/librte_mbuf.a 00:02:06.496 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:06.496 [223/268] Linking static target lib/librte_cryptodev.a 
00:02:06.496 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.430 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:07.430 [226/268] Linking static target lib/librte_ethdev.a 00:02:07.430 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.821 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.821 [229/268] Linking target lib/librte_eal.so.24.1 00:02:08.821 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:08.821 [231/268] Linking target lib/librte_meter.so.24.1 00:02:08.821 [232/268] Linking target lib/librte_ring.so.24.1 00:02:08.821 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:08.821 [234/268] Linking target lib/librte_pci.so.24.1 00:02:08.821 [235/268] Linking target lib/librte_timer.so.24.1 00:02:08.821 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:09.079 [237/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:09.079 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:09.079 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:09.079 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:09.079 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:09.079 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:09.079 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:09.079 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:09.079 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:09.079 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:09.337 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:09.337 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:09.337 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:09.337 [250/268] Linking target lib/librte_reorder.so.24.1 00:02:09.337 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:09.337 [252/268] Linking target lib/librte_net.so.24.1 00:02:09.337 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:09.625 [254/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:09.625 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:09.625 [256/268] Linking target lib/librte_hash.so.24.1 00:02:09.625 [257/268] Linking target lib/librte_security.so.24.1 00:02:09.625 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:09.625 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:10.199 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:11.572 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.572 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:11.572 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:11.830 [264/268] Linking target lib/librte_power.so.24.1 00:02:33.758 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:33.758 [266/268] Linking static target lib/librte_vhost.a 00:02:33.758 [267/268] Generating 
lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.758 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:33.758 INFO: autodetecting backend as ninja 00:02:33.758 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:34.692 CC lib/log/log.o 00:02:34.692 CC lib/log/log_flags.o 00:02:34.692 CC lib/log/log_deprecated.o 00:02:34.692 CC lib/ut_mock/mock.o 00:02:34.692 CC lib/ut/ut.o 00:02:34.949 LIB libspdk_ut_mock.a 00:02:34.949 LIB libspdk_log.a 00:02:34.949 SO libspdk_ut_mock.so.6.0 00:02:34.949 LIB libspdk_ut.a 00:02:34.949 SO libspdk_log.so.7.0 00:02:34.949 SO libspdk_ut.so.2.0 00:02:34.949 SYMLINK libspdk_ut_mock.so 00:02:34.949 SYMLINK libspdk_ut.so 00:02:34.949 SYMLINK libspdk_log.so 00:02:35.206 CC lib/dma/dma.o 00:02:35.206 CXX lib/trace_parser/trace.o 00:02:35.206 CC lib/ioat/ioat.o 00:02:35.206 CC lib/util/base64.o 00:02:35.206 CC lib/util/bit_array.o 00:02:35.206 CC lib/util/cpuset.o 00:02:35.206 CC lib/util/crc16.o 00:02:35.206 CC lib/util/crc32.o 00:02:35.206 CC lib/util/crc32c.o 00:02:35.206 CC lib/util/crc32_ieee.o 00:02:35.206 CC lib/util/crc64.o 00:02:35.206 CC lib/util/dif.o 00:02:35.206 CC lib/util/fd.o 00:02:35.206 CC lib/util/file.o 00:02:35.206 CC lib/util/hexlify.o 00:02:35.206 CC lib/util/iov.o 00:02:35.206 CC lib/util/math.o 00:02:35.206 CC lib/util/pipe.o 00:02:35.206 CC lib/util/strerror_tls.o 00:02:35.206 CC lib/util/string.o 00:02:35.206 CC lib/util/uuid.o 00:02:35.206 CC lib/util/fd_group.o 00:02:35.206 CC lib/util/xor.o 00:02:35.206 CC lib/util/zipf.o 00:02:35.206 CC lib/vfio_user/host/vfio_user_pci.o 00:02:35.206 CC lib/vfio_user/host/vfio_user.o 00:02:35.464 LIB libspdk_dma.a 00:02:35.464 SO libspdk_dma.so.4.0 00:02:35.464 SYMLINK libspdk_dma.so 00:02:35.464 LIB libspdk_ioat.a 00:02:35.464 SO libspdk_ioat.so.7.0 00:02:35.464 LIB libspdk_vfio_user.a 00:02:35.464 SYMLINK libspdk_ioat.so 00:02:35.464 SO libspdk_vfio_user.so.5.0 00:02:35.722 SYMLINK libspdk_vfio_user.so 00:02:35.722 LIB libspdk_util.a 00:02:35.978 SO libspdk_util.so.9.1 00:02:35.979 SYMLINK libspdk_util.so 00:02:36.236 CC lib/conf/conf.o 00:02:36.236 CC lib/json/json_parse.o 00:02:36.236 CC lib/rdma_utils/rdma_utils.o 00:02:36.236 CC lib/rdma_provider/common.o 00:02:36.236 CC lib/idxd/idxd.o 00:02:36.236 CC lib/json/json_util.o 00:02:36.236 CC lib/vmd/vmd.o 00:02:36.236 CC lib/idxd/idxd_user.o 00:02:36.236 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:36.236 CC lib/env_dpdk/env.o 00:02:36.236 CC lib/json/json_write.o 00:02:36.236 CC lib/vmd/led.o 00:02:36.236 CC lib/env_dpdk/memory.o 00:02:36.236 CC lib/idxd/idxd_kernel.o 00:02:36.236 CC lib/env_dpdk/pci.o 00:02:36.236 CC lib/env_dpdk/init.o 00:02:36.236 CC lib/env_dpdk/threads.o 00:02:36.236 CC lib/env_dpdk/pci_ioat.o 00:02:36.236 CC lib/env_dpdk/pci_virtio.o 00:02:36.236 CC lib/env_dpdk/pci_vmd.o 00:02:36.236 CC lib/env_dpdk/pci_idxd.o 00:02:36.236 CC lib/env_dpdk/pci_event.o 00:02:36.236 CC lib/env_dpdk/pci_dpdk.o 00:02:36.236 CC lib/env_dpdk/sigbus_handler.o 00:02:36.236 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:36.236 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:36.236 LIB libspdk_trace_parser.a 00:02:36.236 SO libspdk_trace_parser.so.5.0 00:02:36.494 SYMLINK libspdk_trace_parser.so 00:02:36.494 LIB libspdk_rdma_provider.a 00:02:36.494 SO libspdk_rdma_provider.so.6.0 00:02:36.494 LIB libspdk_conf.a 00:02:36.494 SO libspdk_conf.so.6.0 00:02:36.494 SYMLINK libspdk_rdma_provider.so 00:02:36.494 LIB libspdk_rdma_utils.a 
00:02:36.494 SYMLINK libspdk_conf.so 00:02:36.494 LIB libspdk_json.a 00:02:36.494 SO libspdk_rdma_utils.so.1.0 00:02:36.494 SO libspdk_json.so.6.0 00:02:36.752 SYMLINK libspdk_rdma_utils.so 00:02:36.752 SYMLINK libspdk_json.so 00:02:36.752 CC lib/jsonrpc/jsonrpc_server.o 00:02:36.752 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:36.752 CC lib/jsonrpc/jsonrpc_client.o 00:02:36.752 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:37.009 LIB libspdk_idxd.a 00:02:37.010 SO libspdk_idxd.so.12.0 00:02:37.010 SYMLINK libspdk_idxd.so 00:02:37.267 LIB libspdk_jsonrpc.a 00:02:37.267 SO libspdk_jsonrpc.so.6.0 00:02:37.267 LIB libspdk_vmd.a 00:02:37.267 SO libspdk_vmd.so.6.0 00:02:37.267 SYMLINK libspdk_jsonrpc.so 00:02:37.267 SYMLINK libspdk_vmd.so 00:02:37.525 CC lib/rpc/rpc.o 00:02:37.781 LIB libspdk_rpc.a 00:02:37.781 SO libspdk_rpc.so.6.0 00:02:37.781 SYMLINK libspdk_rpc.so 00:02:38.038 CC lib/trace/trace.o 00:02:38.038 CC lib/trace/trace_flags.o 00:02:38.038 CC lib/notify/notify.o 00:02:38.038 CC lib/keyring/keyring.o 00:02:38.038 CC lib/trace/trace_rpc.o 00:02:38.038 CC lib/notify/notify_rpc.o 00:02:38.038 CC lib/keyring/keyring_rpc.o 00:02:38.038 LIB libspdk_notify.a 00:02:38.038 SO libspdk_notify.so.6.0 00:02:38.305 SYMLINK libspdk_notify.so 00:02:38.305 LIB libspdk_keyring.a 00:02:38.305 SO libspdk_keyring.so.1.0 00:02:38.305 LIB libspdk_trace.a 00:02:38.305 SO libspdk_trace.so.10.0 00:02:38.305 SYMLINK libspdk_keyring.so 00:02:38.305 SYMLINK libspdk_trace.so 00:02:38.571 CC lib/thread/thread.o 00:02:38.571 CC lib/thread/iobuf.o 00:02:38.571 CC lib/sock/sock.o 00:02:38.571 CC lib/sock/sock_rpc.o 00:02:38.829 LIB libspdk_sock.a 00:02:39.086 SO libspdk_sock.so.10.0 00:02:39.086 SYMLINK libspdk_sock.so 00:02:39.086 LIB libspdk_env_dpdk.a 00:02:39.086 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:39.086 CC lib/nvme/nvme_ctrlr.o 00:02:39.086 CC lib/nvme/nvme_fabric.o 00:02:39.086 CC lib/nvme/nvme_ns_cmd.o 00:02:39.086 CC lib/nvme/nvme_ns.o 00:02:39.086 CC lib/nvme/nvme_pcie_common.o 00:02:39.086 CC lib/nvme/nvme_pcie.o 00:02:39.086 CC lib/nvme/nvme_qpair.o 00:02:39.086 CC lib/nvme/nvme.o 00:02:39.086 CC lib/nvme/nvme_quirks.o 00:02:39.086 CC lib/nvme/nvme_transport.o 00:02:39.086 CC lib/nvme/nvme_discovery.o 00:02:39.086 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:39.086 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:39.086 CC lib/nvme/nvme_tcp.o 00:02:39.086 CC lib/nvme/nvme_opal.o 00:02:39.086 CC lib/nvme/nvme_io_msg.o 00:02:39.086 CC lib/nvme/nvme_poll_group.o 00:02:39.086 CC lib/nvme/nvme_zns.o 00:02:39.086 CC lib/nvme/nvme_stubs.o 00:02:39.086 CC lib/nvme/nvme_auth.o 00:02:39.086 CC lib/nvme/nvme_rdma.o 00:02:39.086 CC lib/nvme/nvme_cuse.o 00:02:39.343 SO libspdk_env_dpdk.so.14.1 00:02:39.344 SYMLINK libspdk_env_dpdk.so 00:02:40.719 LIB libspdk_thread.a 00:02:40.719 SO libspdk_thread.so.10.1 00:02:40.719 SYMLINK libspdk_thread.so 00:02:40.719 CC lib/init/json_config.o 00:02:40.719 CC lib/virtio/virtio.o 00:02:40.719 CC lib/init/subsystem.o 00:02:40.719 CC lib/accel/accel.o 00:02:40.719 CC lib/blob/blobstore.o 00:02:40.719 CC lib/blob/request.o 00:02:40.719 CC lib/virtio/virtio_vhost_user.o 00:02:40.719 CC lib/accel/accel_rpc.o 00:02:40.719 CC lib/blob/zeroes.o 00:02:40.719 CC lib/init/subsystem_rpc.o 00:02:40.719 CC lib/virtio/virtio_vfio_user.o 00:02:40.719 CC lib/accel/accel_sw.o 00:02:40.719 CC lib/blob/blob_bs_dev.o 00:02:40.719 CC lib/init/rpc.o 00:02:40.719 CC lib/virtio/virtio_pci.o 00:02:41.001 LIB libspdk_init.a 00:02:41.001 SO libspdk_init.so.5.0 00:02:41.262 SYMLINK libspdk_init.so 00:02:41.262 LIB 
libspdk_virtio.a 00:02:41.262 SO libspdk_virtio.so.7.0 00:02:41.262 SYMLINK libspdk_virtio.so 00:02:41.262 CC lib/event/app.o 00:02:41.262 CC lib/event/reactor.o 00:02:41.262 CC lib/event/log_rpc.o 00:02:41.262 CC lib/event/app_rpc.o 00:02:41.262 CC lib/event/scheduler_static.o 00:02:41.826 LIB libspdk_event.a 00:02:41.826 SO libspdk_event.so.14.0 00:02:42.084 SYMLINK libspdk_event.so 00:02:42.084 LIB libspdk_accel.a 00:02:42.084 LIB libspdk_nvme.a 00:02:42.084 SO libspdk_accel.so.15.1 00:02:42.084 SYMLINK libspdk_accel.so 00:02:42.084 SO libspdk_nvme.so.13.1 00:02:42.341 CC lib/bdev/bdev.o 00:02:42.341 CC lib/bdev/bdev_rpc.o 00:02:42.341 CC lib/bdev/bdev_zone.o 00:02:42.341 CC lib/bdev/part.o 00:02:42.341 CC lib/bdev/scsi_nvme.o 00:02:42.599 SYMLINK libspdk_nvme.so 00:02:45.130 LIB libspdk_blob.a 00:02:45.130 SO libspdk_blob.so.11.0 00:02:45.130 SYMLINK libspdk_blob.so 00:02:45.130 CC lib/blobfs/blobfs.o 00:02:45.130 CC lib/blobfs/tree.o 00:02:45.130 CC lib/lvol/lvol.o 00:02:45.697 LIB libspdk_bdev.a 00:02:45.697 SO libspdk_bdev.so.15.1 00:02:45.966 SYMLINK libspdk_bdev.so 00:02:45.966 CC lib/nbd/nbd.o 00:02:45.966 CC lib/ublk/ublk.o 00:02:45.966 CC lib/nbd/nbd_rpc.o 00:02:45.966 CC lib/nvmf/ctrlr.o 00:02:45.966 CC lib/ublk/ublk_rpc.o 00:02:45.966 CC lib/scsi/dev.o 00:02:45.966 CC lib/nvmf/ctrlr_discovery.o 00:02:45.966 CC lib/nvmf/ctrlr_bdev.o 00:02:45.966 CC lib/scsi/lun.o 00:02:45.966 CC lib/nvmf/subsystem.o 00:02:45.966 CC lib/ftl/ftl_core.o 00:02:45.966 CC lib/scsi/port.o 00:02:45.966 CC lib/ftl/ftl_init.o 00:02:45.966 CC lib/nvmf/nvmf.o 00:02:45.966 CC lib/scsi/scsi.o 00:02:45.966 CC lib/nvmf/nvmf_rpc.o 00:02:45.966 CC lib/ftl/ftl_layout.o 00:02:45.966 CC lib/scsi/scsi_bdev.o 00:02:45.967 CC lib/ftl/ftl_debug.o 00:02:45.967 CC lib/nvmf/transport.o 00:02:45.967 CC lib/scsi/scsi_pr.o 00:02:45.967 CC lib/nvmf/tcp.o 00:02:45.967 CC lib/ftl/ftl_io.o 00:02:45.967 CC lib/scsi/scsi_rpc.o 00:02:45.967 CC lib/ftl/ftl_sb.o 00:02:45.967 CC lib/nvmf/stubs.o 00:02:45.967 CC lib/nvmf/mdns_server.o 00:02:45.967 CC lib/scsi/task.o 00:02:45.967 CC lib/ftl/ftl_l2p.o 00:02:45.967 CC lib/ftl/ftl_l2p_flat.o 00:02:45.967 CC lib/nvmf/rdma.o 00:02:45.967 CC lib/nvmf/auth.o 00:02:45.967 CC lib/ftl/ftl_nv_cache.o 00:02:45.967 CC lib/ftl/ftl_band.o 00:02:45.967 CC lib/ftl/ftl_band_ops.o 00:02:45.967 CC lib/ftl/ftl_writer.o 00:02:45.967 CC lib/ftl/ftl_rq.o 00:02:45.967 CC lib/ftl/ftl_l2p_cache.o 00:02:45.967 CC lib/ftl/ftl_reloc.o 00:02:45.967 CC lib/ftl/ftl_p2l.o 00:02:45.967 CC lib/ftl/mngt/ftl_mngt.o 00:02:45.967 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:45.967 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:45.967 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:45.967 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:45.967 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:46.226 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:46.488 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:46.488 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:46.488 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:46.488 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:46.488 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:46.488 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:46.488 CC lib/ftl/utils/ftl_conf.o 00:02:46.488 CC lib/ftl/utils/ftl_md.o 00:02:46.488 CC lib/ftl/utils/ftl_mempool.o 00:02:46.488 CC lib/ftl/utils/ftl_bitmap.o 00:02:46.488 CC lib/ftl/utils/ftl_property.o 00:02:46.488 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:46.488 LIB libspdk_blobfs.a 00:02:46.488 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:46.488 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:46.488 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:46.488 SO 
libspdk_blobfs.so.10.0 00:02:46.746 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:46.746 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:46.746 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:46.746 SYMLINK libspdk_blobfs.so 00:02:46.746 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:46.746 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:46.746 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:46.746 LIB libspdk_lvol.a 00:02:46.746 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:46.746 CC lib/ftl/base/ftl_base_dev.o 00:02:46.746 CC lib/ftl/base/ftl_base_bdev.o 00:02:46.746 CC lib/ftl/ftl_trace.o 00:02:46.746 SO libspdk_lvol.so.10.0 00:02:47.004 SYMLINK libspdk_lvol.so 00:02:47.004 LIB libspdk_nbd.a 00:02:47.004 SO libspdk_nbd.so.7.0 00:02:47.004 SYMLINK libspdk_nbd.so 00:02:47.262 LIB libspdk_scsi.a 00:02:47.262 SO libspdk_scsi.so.9.0 00:02:47.262 LIB libspdk_ublk.a 00:02:47.262 SYMLINK libspdk_scsi.so 00:02:47.262 SO libspdk_ublk.so.3.0 00:02:47.262 SYMLINK libspdk_ublk.so 00:02:47.520 CC lib/iscsi/conn.o 00:02:47.520 CC lib/vhost/vhost.o 00:02:47.520 CC lib/vhost/vhost_rpc.o 00:02:47.520 CC lib/iscsi/init_grp.o 00:02:47.520 CC lib/iscsi/iscsi.o 00:02:47.520 CC lib/vhost/vhost_scsi.o 00:02:47.520 CC lib/iscsi/md5.o 00:02:47.520 CC lib/vhost/vhost_blk.o 00:02:47.520 CC lib/iscsi/param.o 00:02:47.520 CC lib/vhost/rte_vhost_user.o 00:02:47.520 CC lib/iscsi/portal_grp.o 00:02:47.520 CC lib/iscsi/tgt_node.o 00:02:47.520 CC lib/iscsi/iscsi_subsystem.o 00:02:47.520 CC lib/iscsi/iscsi_rpc.o 00:02:47.520 CC lib/iscsi/task.o 00:02:47.780 LIB libspdk_ftl.a 00:02:48.038 SO libspdk_ftl.so.9.0 00:02:48.297 SYMLINK libspdk_ftl.so 00:02:48.861 LIB libspdk_vhost.a 00:02:48.861 SO libspdk_vhost.so.8.0 00:02:48.861 SYMLINK libspdk_vhost.so 00:02:49.425 LIB libspdk_iscsi.a 00:02:49.425 LIB libspdk_nvmf.a 00:02:49.425 SO libspdk_iscsi.so.8.0 00:02:49.425 SO libspdk_nvmf.so.18.1 00:02:49.425 SYMLINK libspdk_iscsi.so 00:02:49.683 SYMLINK libspdk_nvmf.so 00:02:49.941 CC module/env_dpdk/env_dpdk_rpc.o 00:02:49.941 CC module/accel/ioat/accel_ioat.o 00:02:49.941 CC module/accel/error/accel_error.o 00:02:49.941 CC module/keyring/file/keyring.o 00:02:49.941 CC module/accel/ioat/accel_ioat_rpc.o 00:02:49.941 CC module/accel/error/accel_error_rpc.o 00:02:49.941 CC module/keyring/file/keyring_rpc.o 00:02:49.941 CC module/sock/posix/posix.o 00:02:49.941 CC module/scheduler/gscheduler/gscheduler.o 00:02:49.941 CC module/accel/dsa/accel_dsa.o 00:02:49.941 CC module/accel/dsa/accel_dsa_rpc.o 00:02:49.941 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:49.941 CC module/keyring/linux/keyring.o 00:02:49.941 CC module/keyring/linux/keyring_rpc.o 00:02:49.941 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:49.941 CC module/accel/iaa/accel_iaa.o 00:02:49.941 CC module/accel/iaa/accel_iaa_rpc.o 00:02:49.941 CC module/blob/bdev/blob_bdev.o 00:02:50.199 LIB libspdk_env_dpdk_rpc.a 00:02:50.199 SO libspdk_env_dpdk_rpc.so.6.0 00:02:50.199 SYMLINK libspdk_env_dpdk_rpc.so 00:02:50.199 LIB libspdk_keyring_file.a 00:02:50.199 LIB libspdk_keyring_linux.a 00:02:50.199 LIB libspdk_scheduler_gscheduler.a 00:02:50.199 LIB libspdk_scheduler_dpdk_governor.a 00:02:50.199 SO libspdk_keyring_file.so.1.0 00:02:50.199 SO libspdk_keyring_linux.so.1.0 00:02:50.199 SO libspdk_scheduler_gscheduler.so.4.0 00:02:50.199 LIB libspdk_accel_error.a 00:02:50.199 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:50.199 LIB libspdk_accel_ioat.a 00:02:50.199 SO libspdk_accel_error.so.2.0 00:02:50.199 LIB libspdk_scheduler_dynamic.a 00:02:50.199 LIB libspdk_accel_iaa.a 00:02:50.199 SYMLINK 
libspdk_keyring_file.so 00:02:50.199 SYMLINK libspdk_keyring_linux.so 00:02:50.199 SO libspdk_accel_ioat.so.6.0 00:02:50.199 SYMLINK libspdk_scheduler_gscheduler.so 00:02:50.199 SO libspdk_scheduler_dynamic.so.4.0 00:02:50.199 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:50.199 SO libspdk_accel_iaa.so.3.0 00:02:50.199 SYMLINK libspdk_accel_error.so 00:02:50.457 SYMLINK libspdk_accel_ioat.so 00:02:50.457 LIB libspdk_accel_dsa.a 00:02:50.457 SYMLINK libspdk_scheduler_dynamic.so 00:02:50.457 LIB libspdk_blob_bdev.a 00:02:50.457 SYMLINK libspdk_accel_iaa.so 00:02:50.457 SO libspdk_accel_dsa.so.5.0 00:02:50.457 SO libspdk_blob_bdev.so.11.0 00:02:50.457 SYMLINK libspdk_accel_dsa.so 00:02:50.457 SYMLINK libspdk_blob_bdev.so 00:02:50.716 CC module/blobfs/bdev/blobfs_bdev.o 00:02:50.716 CC module/bdev/error/vbdev_error.o 00:02:50.716 CC module/bdev/malloc/bdev_malloc.o 00:02:50.716 CC module/bdev/lvol/vbdev_lvol.o 00:02:50.716 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:50.716 CC module/bdev/error/vbdev_error_rpc.o 00:02:50.716 CC module/bdev/gpt/gpt.o 00:02:50.716 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:50.716 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:50.716 CC module/bdev/passthru/vbdev_passthru.o 00:02:50.716 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:50.716 CC module/bdev/gpt/vbdev_gpt.o 00:02:50.716 CC module/bdev/raid/bdev_raid.o 00:02:50.716 CC module/bdev/split/vbdev_split.o 00:02:50.716 CC module/bdev/raid/bdev_raid_rpc.o 00:02:50.716 CC module/bdev/nvme/bdev_nvme.o 00:02:50.716 CC module/bdev/raid/bdev_raid_sb.o 00:02:50.716 CC module/bdev/delay/vbdev_delay.o 00:02:50.716 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:50.716 CC module/bdev/split/vbdev_split_rpc.o 00:02:50.716 CC module/bdev/null/bdev_null.o 00:02:50.716 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:50.716 CC module/bdev/raid/raid0.o 00:02:50.716 CC module/bdev/nvme/nvme_rpc.o 00:02:50.716 CC module/bdev/iscsi/bdev_iscsi.o 00:02:50.716 CC module/bdev/null/bdev_null_rpc.o 00:02:50.716 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:50.716 CC module/bdev/aio/bdev_aio.o 00:02:50.716 CC module/bdev/raid/concat.o 00:02:50.716 CC module/bdev/nvme/bdev_mdns_client.o 00:02:50.716 CC module/bdev/raid/raid1.o 00:02:50.716 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:50.716 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:50.716 CC module/bdev/ftl/bdev_ftl.o 00:02:50.716 CC module/bdev/aio/bdev_aio_rpc.o 00:02:50.716 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:50.716 CC module/bdev/nvme/vbdev_opal.o 00:02:50.716 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:50.716 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:50.716 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:50.716 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:50.716 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:50.974 LIB libspdk_blobfs_bdev.a 00:02:50.974 LIB libspdk_bdev_null.a 00:02:51.232 SO libspdk_blobfs_bdev.so.6.0 00:02:51.232 SO libspdk_bdev_null.so.6.0 00:02:51.232 LIB libspdk_bdev_passthru.a 00:02:51.232 SYMLINK libspdk_blobfs_bdev.so 00:02:51.232 LIB libspdk_bdev_split.a 00:02:51.232 SO libspdk_bdev_passthru.so.6.0 00:02:51.232 SYMLINK libspdk_bdev_null.so 00:02:51.232 LIB libspdk_sock_posix.a 00:02:51.232 SO libspdk_bdev_split.so.6.0 00:02:51.232 LIB libspdk_bdev_gpt.a 00:02:51.232 LIB libspdk_bdev_error.a 00:02:51.232 SO libspdk_sock_posix.so.6.0 00:02:51.232 SO libspdk_bdev_gpt.so.6.0 00:02:51.232 SO libspdk_bdev_error.so.6.0 00:02:51.232 SYMLINK libspdk_bdev_passthru.so 00:02:51.232 SYMLINK libspdk_bdev_split.so 00:02:51.232 SYMLINK 
libspdk_bdev_gpt.so 00:02:51.232 LIB libspdk_bdev_ftl.a 00:02:51.232 SYMLINK libspdk_sock_posix.so 00:02:51.232 SYMLINK libspdk_bdev_error.so 00:02:51.232 LIB libspdk_bdev_malloc.a 00:02:51.232 SO libspdk_bdev_ftl.so.6.0 00:02:51.232 LIB libspdk_bdev_aio.a 00:02:51.232 LIB libspdk_bdev_iscsi.a 00:02:51.232 LIB libspdk_bdev_zone_block.a 00:02:51.491 SO libspdk_bdev_malloc.so.6.0 00:02:51.491 SO libspdk_bdev_iscsi.so.6.0 00:02:51.491 SO libspdk_bdev_aio.so.6.0 00:02:51.491 SO libspdk_bdev_zone_block.so.6.0 00:02:51.491 LIB libspdk_bdev_delay.a 00:02:51.491 SYMLINK libspdk_bdev_ftl.so 00:02:51.491 SYMLINK libspdk_bdev_malloc.so 00:02:51.491 SYMLINK libspdk_bdev_aio.so 00:02:51.491 SYMLINK libspdk_bdev_iscsi.so 00:02:51.491 SO libspdk_bdev_delay.so.6.0 00:02:51.491 SYMLINK libspdk_bdev_zone_block.so 00:02:51.491 LIB libspdk_bdev_virtio.a 00:02:51.491 SO libspdk_bdev_virtio.so.6.0 00:02:51.491 SYMLINK libspdk_bdev_delay.so 00:02:51.491 SYMLINK libspdk_bdev_virtio.so 00:02:51.491 LIB libspdk_bdev_lvol.a 00:02:51.491 SO libspdk_bdev_lvol.so.6.0 00:02:51.750 SYMLINK libspdk_bdev_lvol.so 00:02:52.316 LIB libspdk_bdev_raid.a 00:02:52.316 SO libspdk_bdev_raid.so.6.0 00:02:52.316 SYMLINK libspdk_bdev_raid.so 00:02:53.692 LIB libspdk_bdev_nvme.a 00:02:53.692 SO libspdk_bdev_nvme.so.7.0 00:02:53.692 SYMLINK libspdk_bdev_nvme.so 00:02:54.258 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:54.258 CC module/event/subsystems/sock/sock.o 00:02:54.258 CC module/event/subsystems/scheduler/scheduler.o 00:02:54.258 CC module/event/subsystems/vmd/vmd.o 00:02:54.258 CC module/event/subsystems/iobuf/iobuf.o 00:02:54.258 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:54.258 CC module/event/subsystems/keyring/keyring.o 00:02:54.258 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:54.258 LIB libspdk_event_keyring.a 00:02:54.258 LIB libspdk_event_vhost_blk.a 00:02:54.258 LIB libspdk_event_scheduler.a 00:02:54.258 LIB libspdk_event_sock.a 00:02:54.258 LIB libspdk_event_vmd.a 00:02:54.258 LIB libspdk_event_iobuf.a 00:02:54.258 SO libspdk_event_keyring.so.1.0 00:02:54.258 SO libspdk_event_vhost_blk.so.3.0 00:02:54.258 SO libspdk_event_scheduler.so.4.0 00:02:54.258 SO libspdk_event_sock.so.5.0 00:02:54.258 SO libspdk_event_vmd.so.6.0 00:02:54.258 SO libspdk_event_iobuf.so.3.0 00:02:54.258 SYMLINK libspdk_event_keyring.so 00:02:54.258 SYMLINK libspdk_event_vhost_blk.so 00:02:54.258 SYMLINK libspdk_event_scheduler.so 00:02:54.258 SYMLINK libspdk_event_sock.so 00:02:54.258 SYMLINK libspdk_event_vmd.so 00:02:54.258 SYMLINK libspdk_event_iobuf.so 00:02:54.516 CC module/event/subsystems/accel/accel.o 00:02:54.774 LIB libspdk_event_accel.a 00:02:54.774 SO libspdk_event_accel.so.6.0 00:02:54.774 SYMLINK libspdk_event_accel.so 00:02:55.032 CC module/event/subsystems/bdev/bdev.o 00:02:55.032 LIB libspdk_event_bdev.a 00:02:55.032 SO libspdk_event_bdev.so.6.0 00:02:55.325 SYMLINK libspdk_event_bdev.so 00:02:55.325 CC module/event/subsystems/ublk/ublk.o 00:02:55.325 CC module/event/subsystems/scsi/scsi.o 00:02:55.325 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:55.325 CC module/event/subsystems/nbd/nbd.o 00:02:55.325 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:55.583 LIB libspdk_event_ublk.a 00:02:55.583 LIB libspdk_event_nbd.a 00:02:55.583 LIB libspdk_event_scsi.a 00:02:55.583 SO libspdk_event_ublk.so.3.0 00:02:55.583 SO libspdk_event_nbd.so.6.0 00:02:55.583 SO libspdk_event_scsi.so.6.0 00:02:55.583 SYMLINK libspdk_event_ublk.so 00:02:55.583 SYMLINK libspdk_event_nbd.so 00:02:55.583 SYMLINK libspdk_event_scsi.so 
00:02:55.583 LIB libspdk_event_nvmf.a 00:02:55.583 SO libspdk_event_nvmf.so.6.0 00:02:55.583 SYMLINK libspdk_event_nvmf.so 00:02:55.841 CC module/event/subsystems/iscsi/iscsi.o 00:02:55.841 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:55.841 LIB libspdk_event_vhost_scsi.a 00:02:55.841 LIB libspdk_event_iscsi.a 00:02:55.841 SO libspdk_event_vhost_scsi.so.3.0 00:02:55.841 SO libspdk_event_iscsi.so.6.0 00:02:56.099 SYMLINK libspdk_event_vhost_scsi.so 00:02:56.099 SYMLINK libspdk_event_iscsi.so 00:02:56.099 SO libspdk.so.6.0 00:02:56.099 SYMLINK libspdk.so 00:02:56.365 CC test/rpc_client/rpc_client_test.o 00:02:56.365 TEST_HEADER include/spdk/accel.h 00:02:56.365 TEST_HEADER include/spdk/accel_module.h 00:02:56.365 TEST_HEADER include/spdk/assert.h 00:02:56.365 TEST_HEADER include/spdk/barrier.h 00:02:56.365 TEST_HEADER include/spdk/base64.h 00:02:56.365 TEST_HEADER include/spdk/bdev.h 00:02:56.365 CXX app/trace/trace.o 00:02:56.365 CC app/trace_record/trace_record.o 00:02:56.365 TEST_HEADER include/spdk/bdev_module.h 00:02:56.365 TEST_HEADER include/spdk/bdev_zone.h 00:02:56.365 CC app/spdk_nvme_identify/identify.o 00:02:56.365 TEST_HEADER include/spdk/bit_array.h 00:02:56.365 TEST_HEADER include/spdk/bit_pool.h 00:02:56.365 CC app/spdk_top/spdk_top.o 00:02:56.365 TEST_HEADER include/spdk/blob_bdev.h 00:02:56.365 CC app/spdk_lspci/spdk_lspci.o 00:02:56.365 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:56.365 TEST_HEADER include/spdk/blobfs.h 00:02:56.365 CC app/spdk_nvme_perf/perf.o 00:02:56.365 TEST_HEADER include/spdk/blob.h 00:02:56.365 CC app/spdk_nvme_discover/discovery_aer.o 00:02:56.365 TEST_HEADER include/spdk/config.h 00:02:56.365 TEST_HEADER include/spdk/conf.h 00:02:56.365 TEST_HEADER include/spdk/cpuset.h 00:02:56.365 TEST_HEADER include/spdk/crc16.h 00:02:56.365 TEST_HEADER include/spdk/crc64.h 00:02:56.365 TEST_HEADER include/spdk/crc32.h 00:02:56.365 TEST_HEADER include/spdk/dif.h 00:02:56.365 TEST_HEADER include/spdk/dma.h 00:02:56.365 TEST_HEADER include/spdk/env_dpdk.h 00:02:56.365 TEST_HEADER include/spdk/endian.h 00:02:56.365 TEST_HEADER include/spdk/env.h 00:02:56.365 TEST_HEADER include/spdk/event.h 00:02:56.365 TEST_HEADER include/spdk/fd_group.h 00:02:56.365 TEST_HEADER include/spdk/fd.h 00:02:56.365 TEST_HEADER include/spdk/file.h 00:02:56.365 TEST_HEADER include/spdk/ftl.h 00:02:56.365 TEST_HEADER include/spdk/gpt_spec.h 00:02:56.365 TEST_HEADER include/spdk/hexlify.h 00:02:56.365 TEST_HEADER include/spdk/histogram_data.h 00:02:56.365 TEST_HEADER include/spdk/idxd.h 00:02:56.365 TEST_HEADER include/spdk/idxd_spec.h 00:02:56.365 TEST_HEADER include/spdk/init.h 00:02:56.365 TEST_HEADER include/spdk/ioat_spec.h 00:02:56.365 TEST_HEADER include/spdk/ioat.h 00:02:56.365 TEST_HEADER include/spdk/iscsi_spec.h 00:02:56.366 TEST_HEADER include/spdk/json.h 00:02:56.366 TEST_HEADER include/spdk/jsonrpc.h 00:02:56.366 TEST_HEADER include/spdk/keyring.h 00:02:56.366 TEST_HEADER include/spdk/keyring_module.h 00:02:56.366 TEST_HEADER include/spdk/likely.h 00:02:56.366 TEST_HEADER include/spdk/lvol.h 00:02:56.366 TEST_HEADER include/spdk/log.h 00:02:56.366 TEST_HEADER include/spdk/memory.h 00:02:56.366 TEST_HEADER include/spdk/mmio.h 00:02:56.366 TEST_HEADER include/spdk/notify.h 00:02:56.366 TEST_HEADER include/spdk/nbd.h 00:02:56.366 TEST_HEADER include/spdk/nvme.h 00:02:56.366 TEST_HEADER include/spdk/nvme_intel.h 00:02:56.366 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:56.366 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:56.366 TEST_HEADER include/spdk/nvme_spec.h 
00:02:56.366 TEST_HEADER include/spdk/nvme_zns.h 00:02:56.366 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:56.366 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:56.366 TEST_HEADER include/spdk/nvmf.h 00:02:56.366 TEST_HEADER include/spdk/nvmf_spec.h 00:02:56.366 TEST_HEADER include/spdk/nvmf_transport.h 00:02:56.366 TEST_HEADER include/spdk/opal.h 00:02:56.366 TEST_HEADER include/spdk/opal_spec.h 00:02:56.366 TEST_HEADER include/spdk/pci_ids.h 00:02:56.366 TEST_HEADER include/spdk/queue.h 00:02:56.366 TEST_HEADER include/spdk/pipe.h 00:02:56.366 TEST_HEADER include/spdk/reduce.h 00:02:56.366 TEST_HEADER include/spdk/scheduler.h 00:02:56.366 TEST_HEADER include/spdk/rpc.h 00:02:56.366 TEST_HEADER include/spdk/scsi.h 00:02:56.366 TEST_HEADER include/spdk/scsi_spec.h 00:02:56.366 TEST_HEADER include/spdk/sock.h 00:02:56.366 TEST_HEADER include/spdk/stdinc.h 00:02:56.366 TEST_HEADER include/spdk/string.h 00:02:56.366 TEST_HEADER include/spdk/thread.h 00:02:56.366 TEST_HEADER include/spdk/trace.h 00:02:56.366 TEST_HEADER include/spdk/tree.h 00:02:56.366 TEST_HEADER include/spdk/trace_parser.h 00:02:56.366 TEST_HEADER include/spdk/ublk.h 00:02:56.366 TEST_HEADER include/spdk/util.h 00:02:56.366 TEST_HEADER include/spdk/uuid.h 00:02:56.366 TEST_HEADER include/spdk/version.h 00:02:56.366 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:56.366 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:56.366 TEST_HEADER include/spdk/vhost.h 00:02:56.366 TEST_HEADER include/spdk/vmd.h 00:02:56.366 TEST_HEADER include/spdk/xor.h 00:02:56.366 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:56.366 TEST_HEADER include/spdk/zipf.h 00:02:56.366 CXX test/cpp_headers/accel.o 00:02:56.366 CXX test/cpp_headers/accel_module.o 00:02:56.366 CXX test/cpp_headers/assert.o 00:02:56.366 CXX test/cpp_headers/barrier.o 00:02:56.366 CXX test/cpp_headers/base64.o 00:02:56.366 CXX test/cpp_headers/bdev.o 00:02:56.366 CXX test/cpp_headers/bdev_module.o 00:02:56.366 CXX test/cpp_headers/bdev_zone.o 00:02:56.366 CC app/spdk_dd/spdk_dd.o 00:02:56.366 CXX test/cpp_headers/bit_array.o 00:02:56.366 CXX test/cpp_headers/blob_bdev.o 00:02:56.366 CXX test/cpp_headers/bit_pool.o 00:02:56.366 CXX test/cpp_headers/blobfs_bdev.o 00:02:56.366 CXX test/cpp_headers/blobfs.o 00:02:56.366 CXX test/cpp_headers/blob.o 00:02:56.366 CXX test/cpp_headers/conf.o 00:02:56.366 CXX test/cpp_headers/config.o 00:02:56.366 CXX test/cpp_headers/cpuset.o 00:02:56.366 CXX test/cpp_headers/crc16.o 00:02:56.366 CC app/nvmf_tgt/nvmf_main.o 00:02:56.366 CC app/iscsi_tgt/iscsi_tgt.o 00:02:56.366 CXX test/cpp_headers/crc32.o 00:02:56.366 CC examples/ioat/perf/perf.o 00:02:56.366 CC test/env/memory/memory_ut.o 00:02:56.366 CC examples/ioat/verify/verify.o 00:02:56.366 CC examples/util/zipf/zipf.o 00:02:56.366 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:56.366 CC test/app/jsoncat/jsoncat.o 00:02:56.366 CC test/app/histogram_perf/histogram_perf.o 00:02:56.366 CC test/env/pci/pci_ut.o 00:02:56.366 CC test/thread/poller_perf/poller_perf.o 00:02:56.366 CC app/spdk_tgt/spdk_tgt.o 00:02:56.366 CC test/env/vtophys/vtophys.o 00:02:56.366 CC test/app/stub/stub.o 00:02:56.366 CC app/fio/nvme/fio_plugin.o 00:02:56.626 CC test/app/bdev_svc/bdev_svc.o 00:02:56.626 CC test/dma/test_dma/test_dma.o 00:02:56.626 CC app/fio/bdev/fio_plugin.o 00:02:56.626 LINK spdk_lspci 00:02:56.626 CC test/env/mem_callbacks/mem_callbacks.o 00:02:56.626 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:56.626 LINK rpc_client_test 00:02:56.891 LINK interrupt_tgt 00:02:56.891 LINK jsoncat 00:02:56.891 LINK 
nvmf_tgt 00:02:56.891 LINK poller_perf 00:02:56.891 LINK histogram_perf 00:02:56.891 CXX test/cpp_headers/crc64.o 00:02:56.891 LINK spdk_nvme_discover 00:02:56.891 LINK vtophys 00:02:56.891 LINK env_dpdk_post_init 00:02:56.891 LINK zipf 00:02:56.891 CXX test/cpp_headers/dif.o 00:02:56.891 CXX test/cpp_headers/dma.o 00:02:56.891 LINK iscsi_tgt 00:02:56.891 CXX test/cpp_headers/endian.o 00:02:56.891 CXX test/cpp_headers/env_dpdk.o 00:02:56.891 CXX test/cpp_headers/env.o 00:02:56.891 CXX test/cpp_headers/event.o 00:02:56.891 CXX test/cpp_headers/fd_group.o 00:02:56.891 CXX test/cpp_headers/fd.o 00:02:56.891 CXX test/cpp_headers/file.o 00:02:56.891 CXX test/cpp_headers/ftl.o 00:02:56.891 CXX test/cpp_headers/gpt_spec.o 00:02:56.891 CXX test/cpp_headers/hexlify.o 00:02:56.891 CXX test/cpp_headers/histogram_data.o 00:02:56.891 LINK bdev_svc 00:02:56.891 LINK stub 00:02:56.891 LINK spdk_trace_record 00:02:56.891 CXX test/cpp_headers/idxd.o 00:02:56.891 CXX test/cpp_headers/idxd_spec.o 00:02:56.891 LINK spdk_tgt 00:02:56.891 CXX test/cpp_headers/init.o 00:02:56.891 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:56.891 LINK ioat_perf 00:02:56.891 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:56.891 LINK verify 00:02:56.891 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:57.153 CXX test/cpp_headers/ioat.o 00:02:57.153 CXX test/cpp_headers/ioat_spec.o 00:02:57.153 CXX test/cpp_headers/iscsi_spec.o 00:02:57.153 CXX test/cpp_headers/json.o 00:02:57.153 CXX test/cpp_headers/jsonrpc.o 00:02:57.153 CXX test/cpp_headers/keyring.o 00:02:57.153 CXX test/cpp_headers/keyring_module.o 00:02:57.153 CXX test/cpp_headers/likely.o 00:02:57.153 CXX test/cpp_headers/log.o 00:02:57.153 LINK spdk_dd 00:02:57.153 CXX test/cpp_headers/lvol.o 00:02:57.153 CXX test/cpp_headers/memory.o 00:02:57.153 CXX test/cpp_headers/mmio.o 00:02:57.153 CXX test/cpp_headers/nbd.o 00:02:57.153 CXX test/cpp_headers/notify.o 00:02:57.153 CXX test/cpp_headers/nvme.o 00:02:57.153 CXX test/cpp_headers/nvme_intel.o 00:02:57.153 CXX test/cpp_headers/nvme_ocssd.o 00:02:57.153 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:57.153 CXX test/cpp_headers/nvme_spec.o 00:02:57.153 LINK spdk_trace 00:02:57.153 CXX test/cpp_headers/nvme_zns.o 00:02:57.419 CXX test/cpp_headers/nvmf_cmd.o 00:02:57.419 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:57.419 CXX test/cpp_headers/nvmf.o 00:02:57.419 CXX test/cpp_headers/nvmf_spec.o 00:02:57.419 CXX test/cpp_headers/nvmf_transport.o 00:02:57.419 CXX test/cpp_headers/opal.o 00:02:57.419 LINK pci_ut 00:02:57.419 LINK test_dma 00:02:57.419 CXX test/cpp_headers/opal_spec.o 00:02:57.419 CC test/event/event_perf/event_perf.o 00:02:57.419 CC test/event/reactor/reactor.o 00:02:57.419 CXX test/cpp_headers/pci_ids.o 00:02:57.419 CC test/event/reactor_perf/reactor_perf.o 00:02:57.419 CXX test/cpp_headers/pipe.o 00:02:57.419 CC examples/sock/hello_world/hello_sock.o 00:02:57.419 CXX test/cpp_headers/queue.o 00:02:57.419 CC test/event/app_repeat/app_repeat.o 00:02:57.419 CXX test/cpp_headers/reduce.o 00:02:57.419 CC examples/idxd/perf/perf.o 00:02:57.682 CC examples/vmd/lsvmd/lsvmd.o 00:02:57.682 CXX test/cpp_headers/rpc.o 00:02:57.682 CC examples/thread/thread/thread_ex.o 00:02:57.682 CXX test/cpp_headers/scheduler.o 00:02:57.682 CXX test/cpp_headers/scsi.o 00:02:57.682 CXX test/cpp_headers/scsi_spec.o 00:02:57.682 CC examples/vmd/led/led.o 00:02:57.682 CXX test/cpp_headers/sock.o 00:02:57.682 CXX test/cpp_headers/stdinc.o 00:02:57.682 CXX test/cpp_headers/string.o 00:02:57.682 CXX test/cpp_headers/thread.o 00:02:57.682 LINK 
nvme_fuzz 00:02:57.682 CC test/event/scheduler/scheduler.o 00:02:57.682 CXX test/cpp_headers/trace.o 00:02:57.682 CXX test/cpp_headers/trace_parser.o 00:02:57.682 LINK spdk_bdev 00:02:57.682 CXX test/cpp_headers/tree.o 00:02:57.682 CXX test/cpp_headers/ublk.o 00:02:57.683 CXX test/cpp_headers/util.o 00:02:57.683 CXX test/cpp_headers/uuid.o 00:02:57.683 CXX test/cpp_headers/version.o 00:02:57.683 LINK reactor 00:02:57.683 CXX test/cpp_headers/vfio_user_pci.o 00:02:57.683 LINK event_perf 00:02:57.683 CXX test/cpp_headers/vfio_user_spec.o 00:02:57.683 CXX test/cpp_headers/vhost.o 00:02:57.683 CXX test/cpp_headers/vmd.o 00:02:57.683 CXX test/cpp_headers/xor.o 00:02:57.683 CXX test/cpp_headers/zipf.o 00:02:57.683 LINK reactor_perf 00:02:57.947 LINK app_repeat 00:02:57.947 LINK mem_callbacks 00:02:57.947 LINK lsvmd 00:02:57.947 LINK spdk_nvme 00:02:57.947 CC app/vhost/vhost.o 00:02:57.947 LINK led 00:02:57.947 LINK vhost_fuzz 00:02:58.206 LINK hello_sock 00:02:58.206 LINK thread 00:02:58.206 CC test/nvme/reset/reset.o 00:02:58.206 CC test/nvme/sgl/sgl.o 00:02:58.206 CC test/nvme/overhead/overhead.o 00:02:58.206 CC test/nvme/err_injection/err_injection.o 00:02:58.206 CC test/nvme/e2edp/nvme_dp.o 00:02:58.206 CC test/nvme/aer/aer.o 00:02:58.206 CC test/nvme/startup/startup.o 00:02:58.206 CC test/nvme/simple_copy/simple_copy.o 00:02:58.206 CC test/nvme/reserve/reserve.o 00:02:58.206 CC test/blobfs/mkfs/mkfs.o 00:02:58.206 CC test/accel/dif/dif.o 00:02:58.206 CC test/nvme/connect_stress/connect_stress.o 00:02:58.206 CC test/nvme/boot_partition/boot_partition.o 00:02:58.206 CC test/nvme/fused_ordering/fused_ordering.o 00:02:58.206 CC test/nvme/compliance/nvme_compliance.o 00:02:58.206 LINK scheduler 00:02:58.206 CC test/nvme/fdp/fdp.o 00:02:58.206 CC test/nvme/cuse/cuse.o 00:02:58.206 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:58.206 CC test/lvol/esnap/esnap.o 00:02:58.206 LINK vhost 00:02:58.206 LINK spdk_nvme_identify 00:02:58.206 LINK idxd_perf 00:02:58.206 LINK spdk_nvme_perf 00:02:58.464 LINK startup 00:02:58.464 LINK err_injection 00:02:58.464 LINK connect_stress 00:02:58.464 LINK doorbell_aers 00:02:58.464 LINK spdk_top 00:02:58.464 LINK simple_copy 00:02:58.464 CC examples/nvme/hello_world/hello_world.o 00:02:58.464 CC examples/nvme/reconnect/reconnect.o 00:02:58.464 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:58.464 LINK reset 00:02:58.464 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:58.464 LINK boot_partition 00:02:58.464 CC examples/nvme/arbitration/arbitration.o 00:02:58.464 CC examples/nvme/abort/abort.o 00:02:58.464 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:58.464 CC examples/nvme/hotplug/hotplug.o 00:02:58.464 LINK mkfs 00:02:58.464 LINK overhead 00:02:58.464 LINK sgl 00:02:58.464 LINK aer 00:02:58.723 LINK reserve 00:02:58.723 LINK fused_ordering 00:02:58.723 CC examples/accel/perf/accel_perf.o 00:02:58.723 CC examples/blob/hello_world/hello_blob.o 00:02:58.723 LINK nvme_dp 00:02:58.723 CC examples/blob/cli/blobcli.o 00:02:58.723 LINK cmb_copy 00:02:58.723 LINK memory_ut 00:02:58.723 LINK nvme_compliance 00:02:58.981 LINK pmr_persistence 00:02:58.981 LINK hotplug 00:02:58.981 LINK hello_world 00:02:58.981 LINK dif 00:02:58.981 LINK fdp 00:02:58.981 LINK hello_blob 00:02:58.981 LINK arbitration 00:02:58.981 LINK reconnect 00:02:59.239 LINK abort 00:02:59.239 LINK nvme_manage 00:02:59.239 CC test/bdev/bdevio/bdevio.o 00:02:59.239 LINK blobcli 00:02:59.239 LINK accel_perf 00:02:59.804 CC examples/bdev/hello_world/hello_bdev.o 00:02:59.804 CC 
examples/bdev/bdevperf/bdevperf.o 00:02:59.804 LINK bdevio 00:03:00.061 LINK hello_bdev 00:03:00.061 LINK iscsi_fuzz 00:03:00.061 LINK cuse 00:03:00.624 LINK bdevperf 00:03:01.190 CC examples/nvmf/nvmf/nvmf.o 00:03:01.447 LINK nvmf 00:03:06.740 LINK esnap 00:03:06.740 00:03:06.740 real 1m16.058s 00:03:06.740 user 11m18.263s 00:03:06.740 sys 2m26.378s 00:03:06.740 13:14:40 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:06.740 13:14:40 make -- common/autotest_common.sh@10 -- $ set +x 00:03:06.740 ************************************ 00:03:06.740 END TEST make 00:03:06.740 ************************************ 00:03:06.740 13:14:40 -- common/autotest_common.sh@1142 -- $ return 0 00:03:06.740 13:14:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:06.740 13:14:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:06.740 13:14:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:06.740 13:14:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.740 13:14:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:06.740 13:14:40 -- pm/common@44 -- $ pid=52192 00:03:06.740 13:14:40 -- pm/common@50 -- $ kill -TERM 52192 00:03:06.740 13:14:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.740 13:14:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:06.740 13:14:40 -- pm/common@44 -- $ pid=52194 00:03:06.740 13:14:40 -- pm/common@50 -- $ kill -TERM 52194 00:03:06.741 13:14:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.741 13:14:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:06.741 13:14:40 -- pm/common@44 -- $ pid=52196 00:03:06.741 13:14:40 -- pm/common@50 -- $ kill -TERM 52196 00:03:06.741 13:14:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.741 13:14:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:06.741 13:14:40 -- pm/common@44 -- $ pid=52225 00:03:06.741 13:14:40 -- pm/common@50 -- $ sudo -E kill -TERM 52225 00:03:06.741 13:14:40 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:06.741 13:14:40 -- nvmf/common.sh@7 -- # uname -s 00:03:06.741 13:14:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:06.741 13:14:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:06.741 13:14:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:06.741 13:14:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:06.741 13:14:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:06.741 13:14:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:06.741 13:14:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:06.741 13:14:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:06.741 13:14:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:06.741 13:14:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:06.741 13:14:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:06.741 13:14:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:06.741 13:14:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:06.741 13:14:40 -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:03:06.741 13:14:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:06.741 13:14:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:06.741 13:14:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:06.741 13:14:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:06.741 13:14:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:06.741 13:14:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:06.741 13:14:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.741 13:14:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.741 13:14:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.741 13:14:40 -- paths/export.sh@5 -- # export PATH 00:03:06.741 13:14:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.741 13:14:40 -- nvmf/common.sh@47 -- # : 0 00:03:06.741 13:14:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:06.741 13:14:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:06.741 13:14:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:06.741 13:14:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:06.741 13:14:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:06.741 13:14:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:06.741 13:14:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:06.741 13:14:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:06.741 13:14:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:06.741 13:14:40 -- spdk/autotest.sh@32 -- # uname -s 00:03:06.741 13:14:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:06.741 13:14:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:06.741 13:14:40 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:06.741 13:14:40 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:06.741 13:14:40 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:06.741 13:14:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:06.741 13:14:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:06.741 13:14:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:06.741 13:14:40 -- spdk/autotest.sh@48 -- # udevadm_pid=110496 00:03:06.741 13:14:40 -- spdk/autotest.sh@47 -- # 
/usr/sbin/udevadm monitor --property 00:03:06.741 13:14:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:06.741 13:14:40 -- pm/common@17 -- # local monitor 00:03:06.741 13:14:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.741 13:14:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.741 13:14:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.741 13:14:40 -- pm/common@21 -- # date +%s 00:03:06.741 13:14:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.741 13:14:40 -- pm/common@21 -- # date +%s 00:03:06.741 13:14:40 -- pm/common@25 -- # sleep 1 00:03:06.741 13:14:40 -- pm/common@21 -- # date +%s 00:03:06.741 13:14:40 -- pm/common@21 -- # date +%s 00:03:06.741 13:14:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720869280 00:03:06.741 13:14:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720869280 00:03:06.741 13:14:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720869280 00:03:06.741 13:14:40 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720869280 00:03:06.741 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720869280_collect-vmstat.pm.log 00:03:06.741 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720869280_collect-cpu-load.pm.log 00:03:06.741 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720869280_collect-cpu-temp.pm.log 00:03:06.741 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720869280_collect-bmc-pm.bmc.pm.log 00:03:07.309 13:14:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:07.309 13:14:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:07.309 13:14:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:07.309 13:14:41 -- common/autotest_common.sh@10 -- # set +x 00:03:07.309 13:14:41 -- spdk/autotest.sh@59 -- # create_test_list 00:03:07.309 13:14:41 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:07.309 13:14:41 -- common/autotest_common.sh@10 -- # set +x 00:03:07.309 13:14:41 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:07.309 13:14:41 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:07.309 13:14:41 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:07.309 13:14:41 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:07.309 13:14:41 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:07.309 13:14:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:07.309 13:14:41 -- common/autotest_common.sh@1455 -- # uname 00:03:07.309 13:14:41 -- 
common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:07.309 13:14:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:07.309 13:14:41 -- common/autotest_common.sh@1475 -- # uname 00:03:07.309 13:14:41 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:07.309 13:14:41 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:07.309 13:14:41 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:07.309 13:14:41 -- spdk/autotest.sh@72 -- # hash lcov 00:03:07.309 13:14:41 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:07.309 13:14:41 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:07.309 --rc lcov_branch_coverage=1 00:03:07.309 --rc lcov_function_coverage=1 00:03:07.309 --rc genhtml_branch_coverage=1 00:03:07.309 --rc genhtml_function_coverage=1 00:03:07.309 --rc genhtml_legend=1 00:03:07.309 --rc geninfo_all_blocks=1 00:03:07.309 ' 00:03:07.309 13:14:41 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:07.309 --rc lcov_branch_coverage=1 00:03:07.309 --rc lcov_function_coverage=1 00:03:07.309 --rc genhtml_branch_coverage=1 00:03:07.309 --rc genhtml_function_coverage=1 00:03:07.309 --rc genhtml_legend=1 00:03:07.309 --rc geninfo_all_blocks=1 00:03:07.309 ' 00:03:07.309 13:14:41 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:07.309 --rc lcov_branch_coverage=1 00:03:07.309 --rc lcov_function_coverage=1 00:03:07.309 --rc genhtml_branch_coverage=1 00:03:07.309 --rc genhtml_function_coverage=1 00:03:07.309 --rc genhtml_legend=1 00:03:07.309 --rc geninfo_all_blocks=1 00:03:07.309 --no-external' 00:03:07.309 13:14:41 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:07.309 --rc lcov_branch_coverage=1 00:03:07.309 --rc lcov_function_coverage=1 00:03:07.309 --rc genhtml_branch_coverage=1 00:03:07.309 --rc genhtml_function_coverage=1 00:03:07.309 --rc genhtml_legend=1 00:03:07.309 --rc geninfo_all_blocks=1 00:03:07.309 --no-external' 00:03:07.309 13:14:41 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:07.567 lcov: LCOV version 1.14 00:03:07.568 13:14:42 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:12.838 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:12.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:12.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:13.097 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:13.097 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:13.097 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:13.097 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:13.097 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:13.097 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:13.097 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:13.097 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:13.097 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:13.097 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:13.097 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:13.097 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:13.098 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:13.098 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 
00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:13.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:13.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:35.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:35.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:40.339 13:15:15 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:40.339 13:15:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:40.339 13:15:15 -- common/autotest_common.sh@10 -- # set +x 00:03:40.339 13:15:15 -- spdk/autotest.sh@91 -- # rm -f 00:03:40.339 13:15:15 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.712 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:41.712 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:41.712 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:41.712 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:41.712 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:41.712 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:41.712 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:41.712 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:41.712 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:41.712 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:41.712 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:41.712 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:41.713 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:41.713 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:41.713 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:41.713 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:41.713 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:41.713 13:15:16 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:41.713 13:15:16 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:41.713 13:15:16 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:41.713 13:15:16 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:41.713 13:15:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.713 13:15:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:41.713 13:15:16 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:41.713 13:15:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.713 13:15:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.713 13:15:16 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:41.713 13:15:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.713 13:15:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.713 13:15:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:41.713 13:15:16 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:41.713 13:15:16 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:41.713 No valid GPT data, bailing 00:03:41.713 13:15:16 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:41.713 13:15:16 -- scripts/common.sh@391 -- # pt= 00:03:41.713 13:15:16 -- scripts/common.sh@392 -- # return 1 00:03:41.713 13:15:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:41.713 1+0 records in 00:03:41.713 1+0 records out 00:03:41.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00166242 s, 631 MB/s 00:03:41.713 13:15:16 -- spdk/autotest.sh@118 -- # sync 00:03:41.713 13:15:16 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:41.713 13:15:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:41.713 13:15:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:43.638 13:15:18 -- spdk/autotest.sh@124 -- # uname -s 00:03:43.638 13:15:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:43.638 13:15:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:43.638 13:15:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.638 13:15:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.638 13:15:18 -- common/autotest_common.sh@10 -- # set +x 00:03:43.638 ************************************ 00:03:43.638 START TEST setup.sh 00:03:43.638 ************************************ 00:03:43.638 13:15:18 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:43.638 * Looking for test storage... 00:03:43.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:43.638 13:15:18 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:43.638 13:15:18 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:43.638 13:15:18 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:43.638 13:15:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.638 13:15:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.638 13:15:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:43.638 ************************************ 00:03:43.638 START TEST acl 00:03:43.638 ************************************ 00:03:43.638 13:15:18 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:43.638 * Looking for test storage... 
00:03:43.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:43.638 13:15:18 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:43.638 13:15:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:43.638 13:15:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:43.638 13:15:18 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:43.638 13:15:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:43.638 13:15:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:43.638 13:15:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:43.638 13:15:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:43.638 13:15:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:43.638 13:15:18 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:43.638 13:15:18 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:43.638 13:15:18 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:43.638 13:15:18 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:43.638 13:15:18 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:43.639 13:15:18 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.639 13:15:18 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.013 13:15:19 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:45.013 13:15:19 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:45.013 13:15:19 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:45.013 13:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.013 13:15:19 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.013 13:15:19 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:46.391 Hugepages 00:03:46.391 node hugesize free / total 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 00:03:46.391 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:46.391 13:15:20 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:46.391 13:15:20 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.391 13:15:20 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.391 13:15:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:46.391 ************************************ 00:03:46.391 START TEST denied 00:03:46.391 ************************************ 00:03:46.391 13:15:20 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:46.391 13:15:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:46.391 13:15:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:46.391 13:15:20 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:46.391 13:15:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.391 13:15:20 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:47.763 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:47.763 13:15:22 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:47.763 13:15:22 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:47.763 13:15:22 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:47.763 13:15:22 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:47.763 13:15:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:47.763 13:15:22 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:47.763 13:15:22 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:47.763 13:15:22 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:47.763 13:15:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.763 13:15:22 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.291 00:03:50.291 real 0m3.816s 00:03:50.291 user 0m1.070s 00:03:50.291 sys 0m1.824s 00:03:50.291 13:15:24 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.291 13:15:24 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:50.291 ************************************ 00:03:50.291 END TEST denied 00:03:50.291 ************************************ 00:03:50.291 13:15:24 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:50.291 13:15:24 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:50.291 13:15:24 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.291 13:15:24 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.291 13:15:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:50.291 ************************************ 00:03:50.291 START TEST allowed 00:03:50.291 ************************************ 00:03:50.291 13:15:24 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:50.291 13:15:24 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:50.291 13:15:24 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:50.291 13:15:24 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:50.291 13:15:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.291 13:15:24 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:52.843 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:52.843 13:15:27 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:52.843 13:15:27 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:52.843 13:15:27 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:52.843 13:15:27 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.843 13:15:27 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.218 00:03:54.218 real 0m3.916s 00:03:54.218 user 0m1.051s 00:03:54.218 sys 0m1.675s 00:03:54.218 13:15:28 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.218 13:15:28 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:54.218 ************************************ 00:03:54.218 END TEST allowed 00:03:54.218 ************************************ 00:03:54.218 13:15:28 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:54.218 00:03:54.218 real 0m10.436s 00:03:54.218 user 0m3.203s 00:03:54.218 sys 0m5.189s 00:03:54.218 13:15:28 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.218 13:15:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:54.218 ************************************ 00:03:54.218 END TEST acl 00:03:54.218 ************************************ 00:03:54.218 13:15:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:54.218 13:15:28 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:54.218 13:15:28 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.218 13:15:28 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.218 13:15:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:54.218 ************************************ 00:03:54.218 START TEST hugepages 00:03:54.218 ************************************ 00:03:54.218 13:15:28 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:54.218 * Looking for test storage... 00:03:54.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:54.218 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:54.218 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:54.218 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:54.218 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:54.218 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:54.218 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:54.218 13:15:28 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43450224 kB' 'MemAvailable: 46955236 kB' 'Buffers: 2704 kB' 'Cached: 10505448 kB' 'SwapCached: 0 kB' 'Active: 7503948 kB' 'Inactive: 3506552 kB' 'Active(anon): 7109596 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505652 kB' 'Mapped: 200448 kB' 'Shmem: 6607248 kB' 'KReclaimable: 195460 kB' 'Slab: 564752 kB' 'SReclaimable: 195460 kB' 'SUnreclaim: 369292 kB' 'KernelStack: 12848 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 8231488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.219 13:15:28 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.219 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.220 
13:15:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:54.220 13:15:28 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:54.220 13:15:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.220 13:15:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.220 13:15:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.220 ************************************ 00:03:54.220 START TEST default_setup 00:03:54.220 ************************************ 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.220 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.221 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:54.221 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:54.221 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:54.221 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:54.221 13:15:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:54.221 13:15:28 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.221 13:15:28 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.594 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:55.594 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:55.594 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:55.594 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:55.594 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:55.594 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:55.594 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:55.594 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:55.594 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:55.594 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:55.594 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:55.594 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:55.594 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:55.594 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:55.594 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:55.594 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:56.531 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45569864 kB' 'MemAvailable: 49074804 kB' 'Buffers: 2704 kB' 'Cached: 10505712 kB' 'SwapCached: 0 kB' 'Active: 7524056 kB' 'Inactive: 3506552 kB' 'Active(anon): 7129704 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525516 kB' 'Mapped: 200608 kB' 'Shmem: 6607512 kB' 'KReclaimable: 195316 kB' 'Slab: 563856 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368540 kB' 
'KernelStack: 12768 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8252416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 
13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.531 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
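
The xtrace above is the setup/common.sh get_meminfo helper at work: with IFS set to ': ' it reads one "key: value" pair per line, skips (continue) every key that does not match the field it was asked for, and echoes the value once it hits the match, 2048 for Hugepagesize earlier in this section and 0 for AnonHugePages in the scan running here. A minimal standalone sketch of that pattern follows; it is illustrative only, not the verbatim helper (the sed prefix-strip below stands in for the "${mem[@]#Node +([0-9]) }" expansion visible in the trace, and the real script mapfiles the whole file first):

get_meminfo() {
    # Usage: get_meminfo <Field> [<numa-node>]  -- prints the field's value (in kB where applicable)
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live in /sys/devices/system/node/node<N>/meminfo
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the requested field, keep scanning
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")   # node files prefix every line with "Node N "
    return 1
}

On this host, get_meminfo Hugepagesize prints 2048, matching the "echo 2048" / "return 0" pair traced near the top of this section.
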
00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.532 13:15:31 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45569052 kB' 'MemAvailable: 49073992 kB' 'Buffers: 2704 kB' 'Cached: 10505716 kB' 'SwapCached: 0 kB' 'Active: 7522712 kB' 'Inactive: 3506552 kB' 'Active(anon): 7128360 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524192 kB' 'Mapped: 200552 kB' 'Shmem: 6607516 kB' 'KReclaimable: 195316 kB' 'Slab: 563840 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368524 kB' 'KernelStack: 12736 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8252436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.532 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.533 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
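
Every meminfo snapshot printed in this section reports HugePages_Total: 1024, HugePages_Free: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which lines up with the earlier get_test_nr_hugepages 2097152 0 call: the requested size divided by the default huge page size gives the page count default_setup reserves on node 0. A sketch of just that arithmetic, assuming the simple path with HUGEMEM, HUGENODE and NRHUGE unset (they were unset at the top of this run) and interpreting the 2097152 figure in kB, as the Hugetlb value suggests:

#!/usr/bin/env bash
size_kb=2097152                                                     # pool requested by the test
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this host
nr_hugepages=$(( size_kb / hugepagesize_kb ))                       # 2097152 / 2048 = 1024
echo "node 0 needs $nr_hugepages pages of ${hugepagesize_kb} kB"
# Cross-check against the snapshots above: 1024 * 2048 kB = 2097152 kB,
# the Hugetlb figure reported once the pool is in place.
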
00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.796 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.796 13:15:31 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45569484 kB' 'MemAvailable: 49074424 kB' 'Buffers: 2704 kB' 'Cached: 10505732 kB' 'SwapCached: 0 kB' 'Active: 7522552 kB' 'Inactive: 3506552 kB' 'Active(anon): 7128200 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524056 kB' 'Mapped: 200476 kB' 'Shmem: 6607532 kB' 'KReclaimable: 195316 kB' 'Slab: 563832 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368516 kB' 'KernelStack: 12816 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8252456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.797 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 
13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.798 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.799 nr_hugepages=1024 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.799 resv_hugepages=0 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.799 surplus_hugepages=0 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.799 anon_hugepages=0 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45569484 
kB' 'MemAvailable: 49074424 kB' 'Buffers: 2704 kB' 'Cached: 10505752 kB' 'SwapCached: 0 kB' 'Active: 7522624 kB' 'Inactive: 3506552 kB' 'Active(anon): 7128272 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524060 kB' 'Mapped: 200476 kB' 'Shmem: 6607552 kB' 'KReclaimable: 195316 kB' 'Slab: 563832 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368516 kB' 'KernelStack: 12816 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8252476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.799 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.800 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:56.801 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26989896 kB' 'MemUsed: 5839988 kB' 'SwapCached: 0 kB' 'Active: 2671792 kB' 'Inactive: 109764 kB' 'Active(anon): 2560904 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2517048 kB' 'Mapped: 46028 kB' 'AnonPages: 267628 kB' 'Shmem: 2296396 kB' 'KernelStack: 7256 kB' 'PageTables: 4960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96496 kB' 'Slab: 311792 kB' 'SReclaimable: 96496 kB' 'SUnreclaim: 215296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.802 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.803 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:56.804 node0=1024 expecting 1024 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:56.804 00:03:56.804 real 0m2.473s 00:03:56.804 user 0m0.676s 00:03:56.804 sys 0m0.934s 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.804 13:15:31 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:56.804 ************************************ 00:03:56.804 END TEST default_setup 00:03:56.804 ************************************ 00:03:56.804 13:15:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:56.804 13:15:31 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:56.804 13:15:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.804 13:15:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.804 13:15:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.804 ************************************ 00:03:56.804 START TEST per_node_1G_alloc 00:03:56.804 ************************************ 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:56.804 13:15:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.804 13:15:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:57.740 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:57.740 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:57.740 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:57.740 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:57.740 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:57.740 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:57.740 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:58.002 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:58.002 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:58.002 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:58.002 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:58.002 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:58.002 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:58.002 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:58.002 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:58.002 
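The trace above converts the 1 GB (1048576 kB) request of the per_node_1G_alloc test into a per-node page count: with the 2048 kB hugepage size reported in the meminfo dumps below, that is 512 pages for each of nodes 0 and 1 (NRHUGE=512, HUGENODE=0,1), or 1024 pages system-wide. A minimal, self-contained sketch of that arithmetic follows; the variable and helper names are illustrative, not the ones used in setup/hugepages.sh.

#!/usr/bin/env bash
# Illustrative sketch only: recomputes the per-node hugepage counts that the
# xtrace above derives (1048576 kB per node / 2048 kB pages = 512 pages/node).
set -euo pipefail

size_kb=1048576               # requested allocation per node, in kB (1 GB)
hugepage_kb=2048              # hugepage size, per the 'Hugepagesize: 2048 kB' lines below
node_ids=(0 1)                # NUMA nodes exercised by the test (HUGENODE=0,1)

nr_hugepages=$(( size_kb / hugepage_kb ))   # 512 pages per node

total=0
for node in "${node_ids[@]}"; do
    echo "node${node}=${nr_hugepages}"
    total=$(( total + nr_hugepages ))
done
echo "total=${total}"         # 1024, matching 'HugePages_Total: 1024' in the meminfo dumps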
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:58.002 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45556716 kB' 'MemAvailable: 49061656 kB' 'Buffers: 2704 kB' 'Cached: 10505828 kB' 'SwapCached: 0 kB' 'Active: 7522824 kB' 'Inactive: 3506552 kB' 'Active(anon): 7128472 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524180 kB' 'Mapped: 200568 kB' 'Shmem: 6607628 kB' 'KReclaimable: 195316 kB' 'Slab: 563928 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368612 kB' 'KernelStack: 12832 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8252532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.002 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.003 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45558996 kB' 'MemAvailable: 49063936 kB' 'Buffers: 2704 kB' 'Cached: 10505832 kB' 'SwapCached: 0 kB' 'Active: 7523172 kB' 'Inactive: 3506552 kB' 'Active(anon): 7128820 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524548 kB' 'Mapped: 200568 kB' 'Shmem: 6607632 kB' 'KReclaimable: 195316 kB' 'Slab: 563928 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368612 kB' 'KernelStack: 12832 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8252552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- 
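The long runs of IFS=': ' / read -r var val _ / continue above are get_meminfo from setup/common.sh scanning the captured meminfo snapshot key by key until it reaches the requested field (first AnonHugePages, then HugePages_Surp), echoing its value and returning; verify_nr_hugepages records the results as anon=0 and surp=0 before querying HugePages_Rsvd. A rough standalone sketch of that lookup is shown below, assuming plain /proc/meminfo input; the real helper also accepts a node number and strips the leading "Node N " prefix from the per-node meminfo file, as the mapfile line in the trace shows. The function name here is illustrative only.

#!/usr/bin/env bash
# Sketch of the lookup the xtrace above performs: scan /proc/meminfo line by
# line, skip keys that do not match, and print the value of the requested one.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # same skip-until-match loop as the trace
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# Example: the fields the per_node_1G_alloc verification queries above.
surp=$(get_meminfo_field HugePages_Surp)
rsvd=$(get_meminfo_field HugePages_Rsvd)
echo "surp=${surp} rsvd=${rsvd}"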
setup/hugepages.sh@99 -- # surp=0 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45559136 kB' 'MemAvailable: 49064076 kB' 'Buffers: 2704 kB' 'Cached: 10505848 kB' 'SwapCached: 0 kB' 'Active: 7522924 kB' 'Inactive: 3506552 kB' 'Active(anon): 7128572 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524180 kB' 'Mapped: 200488 kB' 'Shmem: 6607648 kB' 'KReclaimable: 195316 kB' 'Slab: 563904 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368588 kB' 'KernelStack: 12848 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8252572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 
13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.007 nr_hugepages=1024 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.007 
resv_hugepages=0 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.007 surplus_hugepages=0 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.007 anon_hugepages=0 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.007 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45559520 kB' 'MemAvailable: 49064460 kB' 'Buffers: 2704 kB' 'Cached: 10505872 kB' 'SwapCached: 0 kB' 'Active: 7523004 kB' 'Inactive: 3506552 kB' 'Active(anon): 7128652 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524180 kB' 'Mapped: 200488 kB' 'Shmem: 6607672 kB' 'KReclaimable: 195316 kB' 'Slab: 563904 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368588 kB' 'KernelStack: 12848 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8252596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 
13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:58.276 13:15:32 
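[editor note] The long run of "continue" entries above is the per-key scan inside setup/common.sh's get_meminfo helper. For readability, here is a condensed bash sketch of that lookup, reconstructed only from the xtrace output shown (the verbatim SPDK helper is not reproduced here, so details beyond the trace are assumptions):

shopt -s extglob   # needed for the +([0-9]) prefix strip below

get_meminfo() {
    # Usage: get_meminfo <Key> [node]  -> prints the value column for <Key>
    local get=$1 node=${2:-}
    local mem line var val _
    local mem_f=/proc/meminfo

    # With a node argument, read the per-node view when it exists.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it so keys match.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan "Key: value [kB]" lines until the requested key is found;
    # every non-matching key shows up as one "continue" entry in the trace.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Example calls matching the trace: get_meminfo HugePages_Rsvd ; get_meminfo HugePages_Surp 0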
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28037476 kB' 'MemUsed: 4792408 kB' 'SwapCached: 0 kB' 'Active: 2672004 kB' 'Inactive: 109764 kB' 'Active(anon): 2561116 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2517056 kB' 'Mapped: 45604 kB' 'AnonPages: 267880 kB' 'Shmem: 2296404 kB' 'KernelStack: 7304 kB' 'PageTables: 5048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96496 kB' 'Slab: 311788 kB' 'SReclaimable: 96496 kB' 'SUnreclaim: 215292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 
13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.277 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17525976 kB' 'MemUsed: 10185848 kB' 'SwapCached: 0 kB' 'Active: 4850972 kB' 'Inactive: 3396788 kB' 'Active(anon): 4567508 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7991564 kB' 'Mapped: 154884 kB' 'AnonPages: 256304 kB' 'Shmem: 4311312 kB' 'KernelStack: 5544 kB' 'PageTables: 3172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98820 kB' 'Slab: 252116 kB' 'SReclaimable: 98820 kB' 'SUnreclaim: 153296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
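Editor's note: the trace above is get_meminfo resolving a per-node query. Because a node index (1) was passed, it swaps /proc/meminfo for /sys/devices/system/node/node1/meminfo, strips the "Node 1 " prefix from every line, and walks the fields until HugePages_Surp is found (0 on this node). A minimal standalone sketch of that lookup, assuming the same procfs/sysfs layout; the helper name and flow below are illustrative, and the traced setup/common.sh remains the authoritative implementation:

    #!/usr/bin/env bash
    # Sketch: return one field from /proc/meminfo or from a NUMA node's meminfo.
    # Mirrors what the traced get_meminfo appears to do; not the SPDK script itself.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookups come from sysfs when the node directory exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val
        while IFS= read -r line; do
            line=${line#Node * }                  # drop the "Node N " prefix (sysfs files only)
            var=${line%%:*}                       # field name before the colon
            val=${line#*:}; val=${val//[!0-9]/}   # numeric part of the value
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo_sketch HugePages_Surp 1    # prints 0 on the node traced above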
00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.278 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.279 13:15:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.279 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.280 13:15:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:58.280 node0=512 expecting 512 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:58.280 node1=512 expecting 512 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:58.280 00:03:58.280 real 0m1.414s 00:03:58.280 user 0m0.587s 00:03:58.280 sys 0m0.788s 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.280 13:15:32 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:58.280 ************************************ 00:03:58.280 END TEST per_node_1G_alloc 00:03:58.280 ************************************ 00:03:58.280 13:15:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:58.280 13:15:32 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:58.280 13:15:32 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.280 13:15:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.280 13:15:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.280 ************************************ 00:03:58.280 START TEST even_2G_alloc 00:03:58.280 ************************************ 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:58.280 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:58.281 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.281 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:58.281 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:58.281 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:58.281 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.281 13:15:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.214 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:59.214 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
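Editor's note: before the device listing above, the even_2G_alloc prologue converts the 2097152 kB request into 1024 default-size (2048 kB) hugepages and, with no explicit node list, deals them out evenly across the two NUMA nodes (512 each) before exporting NRHUGE=1024 and HUGE_EVEN_ALLOC=yes for scripts/setup.sh. A simplified sketch of that arithmetic, using only the numbers visible in the trace:

    #!/usr/bin/env bash
    # Sketch of the even per-node split seen in the trace (not the SPDK helper itself).
    size_kb=2097152                          # requested pool: 2 GiB, as passed to get_test_nr_hugepages
    default_hugepage_kb=2048                 # Hugepagesize reported in /proc/meminfo
    no_nodes=2                               # NUMA nodes on this machine

    nr_hugepages=$((size_kb / default_hugepage_kb))       # 1024 pages total
    declare -a nodes_test
    for ((node = no_nodes - 1; node >= 0; node--)); do
        nodes_test[node]=$((nr_hugepages / no_nodes))      # 512 pages per node
    done

    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512
    # The job then exports NRHUGE=1024 HUGE_EVEN_ALLOC=yes and runs scripts/setup.sh,
    # which performs the actual reservation.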
00:03:59.214 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:59.214 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:59.214 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:59.214 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:59.214 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:59.214 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:59.214 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:59.214 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:59.214 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:59.214 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:59.214 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:59.214 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:59.214 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:59.214 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:59.214 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45566132 kB' 'MemAvailable: 49071072 kB' 'Buffers: 2704 kB' 'Cached: 10505968 kB' 'SwapCached: 0 kB' 'Active: 7523372 kB' 'Inactive: 3506552 kB' 'Active(anon): 7129020 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524380 kB' 'Mapped: 200512 kB' 'Shmem: 6607768 kB' 'KReclaimable: 195316 kB' 'Slab: 563852 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368536 kB' 'KernelStack: 12816 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8252800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.478 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 
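Editor's note: the check that opens verify_nr_hugepages above ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) is testing whether transparent hugepages are fully disabled; since they are not, the script reads AnonHugePages from /proc/meminfo so THP-backed anonymous memory is not confused with the static pool (0 kB in this run). A sketch of that accounting, assuming the standard sysfs toggle path; the variable names are illustrative:

    #!/usr/bin/env bash
    # Sketch: account for transparent hugepages before checking the static pool.
    # The sysfs path and the "[never]" convention are standard kernel interfaces;
    # the surrounding names are illustrative, not SPDK's.
    thp_setting=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"

    anon_kb=0
    if [[ $thp_setting != *"[never]"* ]]; then
        # THP may be handing out anonymous huge pages; read how many kB are in use.
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "AnonHugePages in use: ${anon_kb} kB"    # 0 kB in the traced run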
13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.479 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- 
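Editor's note: at this point the AnonHugePages walk has returned 0 and the helper moves on to HugePages_Surp with the same field-by-field machinery, building up the per-node totals that the test later echoes as "nodeN=... expecting ...". A simplified sketch of that final comparison, reading the per-node counters straight from the standard kernel sysfs files rather than reproducing the script's meminfo bookkeeping; the variable names are illustrative:

    #!/usr/bin/env bash
    # Sketch: compare expected per-node hugepage counts with what the kernel reports.
    nodes_test=(512 512)    # expectation from the even split traced earlier

    ok=1
    for node in "${!nodes_test[@]}"; do
        actual=$(cat "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node${node}=${actual} expecting ${nodes_test[node]}"
        [[ $actual -eq ${nodes_test[node]} ]] || ok=0
    done
    (( ok )) && echo "per-node allocation verified"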
setup/hugepages.sh@97 -- # anon=0 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45566656 kB' 'MemAvailable: 49071596 kB' 'Buffers: 2704 kB' 'Cached: 10505972 kB' 'SwapCached: 0 kB' 'Active: 7523284 kB' 'Inactive: 3506552 kB' 'Active(anon): 7128932 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524300 kB' 'Mapped: 200512 kB' 'Shmem: 6607772 kB' 'KReclaimable: 195316 kB' 'Slab: 563828 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368512 kB' 'KernelStack: 12848 kB' 'PageTables: 8212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8252820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.480 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.481 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45565900 kB' 'MemAvailable: 49070840 kB' 'Buffers: 2704 kB' 'Cached: 10505984 kB' 'SwapCached: 0 kB' 'Active: 7523224 kB' 'Inactive: 3506552 kB' 'Active(anon): 7128872 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524208 kB' 'Mapped: 200512 kB' 'Shmem: 6607784 kB' 'KReclaimable: 195316 kB' 'Slab: 563892 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368576 kB' 'KernelStack: 12816 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8252840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
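The xtrace block above is setup/common.sh's get_meminfo helper walking a snapshot of /proc/meminfo: it mapfiles the file into an array, strips any leading "Node <n> " prefix (only present in the per-node meminfo files), then splits each "key: value" pair with IFS=': ' and continues past every key that is not the one requested, echoing the value once the key matches. A minimal bash sketch of that loop, reconstructed from the trace itself (the real SPDK helper may differ in details; the trailing fallback is an assumption):

shopt -s extglob
get_meminfo_sketch() {
    # get_meminfo_sketch <key> [numa-node], e.g. get_meminfo_sketch HugePages_Rsvd
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node${node}/meminfo ]] &&
        mem_f=/sys/devices/system/node/node${node}/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # per-node files prefix every line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # the long runs of "continue" in the trace are this skip
        echo "$val"                         # e.g. 0 for HugePages_Rsvd in the snapshot above
        return 0
    done
    echo 0                                  # assumed fallback when the key is absent
}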
00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.482 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.483 nr_hugepages=1024 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.483 resv_hugepages=0 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.483 surplus_hugepages=0 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.483 anon_hugepages=0 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.483 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
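At this point the test has established surp=0 and resv=0, and the echoed summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) is checked against HugePages_Total taken from another meminfo snapshot in the trace that follows. A condensed view of the accounting asserted here, with values taken from the snapshots in this log (the surrounding hugepages.sh logic is paraphrased, not quoted):

nr_hugepages=1024    # 1024 x 2048 kB pages = 2 GiB, matching 'Hugetlb: 2097152 kB' above
surp=0               # HugePages_Surp
resv=0               # HugePages_Rsvd
total=1024           # HugePages_Total, re-read below
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2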
00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45565144 kB' 'MemAvailable: 49070084 kB' 'Buffers: 2704 kB' 'Cached: 10506008 kB' 'SwapCached: 0 kB' 'Active: 7523156 kB' 'Inactive: 3506552 kB' 'Active(anon): 7128804 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524168 kB' 'Mapped: 200512 kB' 'Shmem: 6607808 kB' 'KReclaimable: 195316 kB' 'Slab: 563892 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368576 kB' 'KernelStack: 12800 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8252864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 
13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.484 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.485 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 13:15:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.745 
13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28050072 kB' 'MemUsed: 4779812 kB' 'SwapCached: 0 kB' 'Active: 2672660 kB' 'Inactive: 109764 kB' 'Active(anon): 2561772 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2517064 kB' 'Mapped: 45624 kB' 'AnonPages: 268480 kB' 'Shmem: 2296412 kB' 'KernelStack: 7320 kB' 'PageTables: 5096 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96496 kB' 'Slab: 311788 kB' 'SReclaimable: 96496 kB' 'SUnreclaim: 215292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.746 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.746 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.746 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.746 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
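The get_meminfo call traced above boils down to scanning the chosen meminfo file line by line and echoing the value of the single field it was asked for (HugePages_Surp for node 0 here, which came back as 0). A rough standalone equivalent, assuming the usual /sys/devices/system/node layout; the get_field name and the sed-based prefix strip are illustrative, while the traced common.sh uses mapfile plus an extglob substitution for the same job:

#!/usr/bin/env bash
# Illustrative stand-in for the get_meminfo lookup traced above: print the
# value of one meminfo field, optionally restricted to a single NUMA node.
get_field() {
    local field=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <N> "; drop that, then split
    # "Field: value [kB]" on ": " and stop at the first matching field.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$field" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
get_field HugePages_Surp 0   # prints 0 on this runner, matching the trace above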
00:03:59.746 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.746 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.746 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.746 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.746 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:59.746 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.746 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.746 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.746 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:59.747 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.747 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.747 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.747 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.747 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.747 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17516332 kB' 'MemUsed: 10195492 kB' 'SwapCached: 0 kB' 'Active: 4850616 kB' 'Inactive: 3396788 kB' 'Active(anon): 4567152 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7991688 kB' 'Mapped: 154888 kB' 'AnonPages: 255756 kB' 'Shmem: 4311436 kB' 'KernelStack: 5512 kB' 'PageTables: 3080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98820 kB' 'Slab: 252104 kB' 'SReclaimable: 98820 kB' 'SUnreclaim: 153284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
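The 'expecting 512' lines just below compare each node's hugepage count against an even split of the requested pool across the two NUMA nodes; the odd_alloc test later in this log applies the same integer split when it turns 1025 pages into 513 on node0 and 512 on node1. A small sketch of that arithmetic (split_hugepages and per_node are illustrative names, not helpers from the traced scripts):

# Hand out a hugepage count across NUMA nodes, mirroring what the odd_alloc
# trace below walks through: go from the highest node index down, give each
# node floor(remaining / nodes_left) pages, and let any remainder land on the
# lower-numbered nodes.
split_hugepages() {
    local remaining=$1 nodes_left=$2
    local -a per_node=()
    while (( nodes_left > 0 )); do
        per_node[nodes_left - 1]=$(( remaining / nodes_left ))
        remaining=$(( remaining - per_node[nodes_left - 1] ))
        nodes_left=$(( nodes_left - 1 ))
    done
    echo "${per_node[@]}"
}
split_hugepages 1024 2   # prints "512 512" (this even_2G_alloc run)
split_hugepages 1025 2   # prints "513 512" (the odd_alloc run that follows)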
00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:59.748 node0=512 expecting 512 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:59.748 node1=512 expecting 512 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:59.748 00:03:59.748 real 0m1.405s 00:03:59.748 user 0m0.561s 00:03:59.748 sys 0m0.806s 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.748 13:15:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.748 ************************************ 00:03:59.748 END TEST even_2G_alloc 00:03:59.748 ************************************ 00:03:59.748 13:15:34 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:59.748 13:15:34 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:59.748 13:15:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.748 13:15:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.748 13:15:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.748 ************************************ 00:03:59.748 START TEST odd_alloc
00:03:59.748 ************************************ 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.748 13:15:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.682 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:00.682 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:00.682 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:00.682 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:00.682 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:00.682 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:00.682 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:00.682 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 
00:04:00.682 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:00.682 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:00.682 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:00.682 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:00.682 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:00.943 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:00.943 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:00.943 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:00.943 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45559620 kB' 'MemAvailable: 49064560 kB' 'Buffers: 2704 kB' 'Cached: 10506100 kB' 'SwapCached: 0 kB' 'Active: 7520836 kB' 'Inactive: 3506552 kB' 'Active(anon): 7126484 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521896 kB' 'Mapped: 199780 kB' 'Shmem: 6607900 kB' 'KReclaimable: 195316 kB' 'Slab: 563820 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368504 kB' 'KernelStack: 12832 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 8240144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 
'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:00.943 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
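With anon pinned to 0, the remaining inputs for the verifier are the surplus and reserved counters it reads next; the end goal is the same bookkeeping the even_2G_alloc trace applied earlier with (( 1024 == nr_hugepages + surp + resv )), just with this test's 1025-page request. A self-contained sketch of that arithmetic (nr_requested, total, surp, resv and node_sum are illustrative names, and the real helpers also fold per-node surplus and reserved pages into their expectations):

# Compare the kernel's global hugepage counters, and the per-node totals,
# against the number of pages the test asked for.
nr_requested=1025
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
(( total == nr_requested + surp + resv )) || echo "unexpected HugePages_Total: $total"
node_sum=0
for f in /sys/devices/system/node/node[0-9]*/meminfo; do
    # Per-node lines read "Node <N> HugePages_Total: <count>", hence field 4.
    node_sum=$(( node_sum + $(awk '/HugePages_Total:/ {print $4}' "$f") ))
done
(( node_sum == total )) || echo "per-node totals ($node_sum) do not match $total"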
13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45559996 kB' 'MemAvailable: 49064936 kB' 'Buffers: 2704 kB' 'Cached: 10506104 kB' 'SwapCached: 0 kB' 'Active: 7520640 kB' 'Inactive: 3506552 kB' 'Active(anon): 7126288 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521644 kB' 'Mapped: 199776 kB' 'Shmem: 6607904 kB' 'KReclaimable: 195316 kB' 'Slab: 563824 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368508 kB' 'KernelStack: 12912 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 8240164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.944 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.945 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45560960 kB' 'MemAvailable: 49065900 kB' 'Buffers: 2704 kB' 'Cached: 10506136 kB' 'SwapCached: 0 kB' 'Active: 7520988 kB' 'Inactive: 3506552 kB' 'Active(anon): 7126636 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521992 kB' 'Mapped: 199656 kB' 'Shmem: 6607936 kB' 'KReclaimable: 195316 kB' 'Slab: 563876 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368560 kB' 'KernelStack: 12928 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 8241548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 
13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.947 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:00.948 nr_hugepages=1025 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.948 resv_hugepages=0 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.948 surplus_hugepages=0 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.948 anon_hugepages=0 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45560692 kB' 'MemAvailable: 49065632 kB' 'Buffers: 2704 kB' 'Cached: 10506144 kB' 'SwapCached: 0 kB' 'Active: 7521600 kB' 'Inactive: 3506552 kB' 'Active(anon): 7127248 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522596 kB' 'Mapped: 199656 kB' 'Shmem: 6607944 kB' 'KReclaimable: 195316 kB' 'Slab: 563876 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368560 kB' 'KernelStack: 13056 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 8241568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 13:15:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.209 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.210 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28050500 kB' 'MemUsed: 4779384 kB' 'SwapCached: 0 kB' 'Active: 2670656 kB' 'Inactive: 109764 kB' 'Active(anon): 2559768 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2517076 kB' 'Mapped: 44908 kB' 'AnonPages: 266492 kB' 'Shmem: 2296424 kB' 'KernelStack: 7192 kB' 'PageTables: 4604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96496 kB' 'Slab: 311796 kB' 'SReclaimable: 96496 kB' 'SUnreclaim: 215300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.211 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
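For reference, the long run of escaped '[[ Key == \H\u\g\e... ]]' tests above is setup/common.sh's get_meminfo helper scanning a meminfo file one 'Key: value' pair at a time until it reaches the requested field and echoes its value. A minimal stand-alone sketch of that lookup, under a hypothetical name and simplified relative to the real helper (which mapfiles the whole file and strips the 'Node N ' prefix up front), would look like this:

    # get_meminfo_sketch KEY [NODE] - simplified stand-in for setup/common.sh's get_meminfo.
    # Assumes the 'Key: value kB' layout of /proc/meminfo and, when NODE is given,
    # /sys/devices/system/node/node<NODE>/meminfo as seen in the dumps in this trace.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#"Node $node "}         # per-node files prefix every line with "Node <N> "
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then      # the repeated [[ key == key ]] tests in the trace
                echo "$val"                    # e.g. the 'echo 1025' / 'echo 0' lines above
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Surp 0 prints 0 against the node0 dump above.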
00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17514856 kB' 'MemUsed: 10196968 kB' 'SwapCached: 0 kB' 'Active: 4849680 kB' 'Inactive: 3396788 kB' 'Active(anon): 4566216 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7991792 kB' 'Mapped: 154748 kB' 'AnonPages: 254348 kB' 'Shmem: 4311540 kB' 'KernelStack: 5592 kB' 'PageTables: 3244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98820 kB' 'Slab: 252080 kB' 'SReclaimable: 98820 kB' 'SUnreclaim: 153260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
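The arithmetic behind this odd_alloc pass is compact: the test requested an odd total of 1025 pages, the kernel placed 512 on node 0 and 513 on node 1 (the HugePages_Total values in the two node dumps above), and after folding in the zero surplus/reserved counts the check only requires the set of per-node totals to match the expectation, which is why the 'node0=512 expecting 513' and 'node1=513 expecting 512' echoes a little further down still pass. A self-contained sketch of that comparison, with counts hard-coded from this run and variable names simplified relative to hugepages.sh:

    # Values hard-coded from this run; the real hugepages.sh reads them via get_meminfo.
    nr_hugepages=1025 surp=0 resv=0          # requested total and the surplus/reserved read above
    hugepages_total=1025                     # HugePages_Total from /proc/meminfo
    (( hugepages_total == nr_hugepages + surp + resv )) || exit 1
    nodes_test=([0]=512 [1]=513)             # per-node HugePages_Total actually allocated
    nodes_expect=([0]=513 [1]=512)           # per-node split the test script asked for
    sorted_t=(); sorted_s=()
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += surp + resv ))            # both 0 here, per the 'echo 0' lines
        sorted_t[${nodes_test[node]}]=1
        sorted_s[${nodes_expect[node]}]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_expect[node]}"
    done
    # Pass as long as the sets of totals agree, regardless of which node got the extra page.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'odd_alloc OK'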
00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.212 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.213 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:01.214 node0=512 expecting 513 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:01.214 node1=513 expecting 512 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:01.214 00:04:01.214 real 0m1.432s 00:04:01.214 user 0m0.572s 00:04:01.214 sys 0m0.813s 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.214 13:15:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.214 ************************************ 00:04:01.214 END TEST odd_alloc 00:04:01.214 ************************************ 00:04:01.214 13:15:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:01.214 13:15:35 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:01.214 13:15:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.214 13:15:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.214 13:15:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.214 ************************************ 00:04:01.214 START TEST custom_alloc 00:04:01.214 ************************************ 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:01.214 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:01.215 13:15:35 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.215 13:15:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.152 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:02.152 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:02.152 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:02.152 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:02.152 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:02.152 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:02.152 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:02.152 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:02.152 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:02.152 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:02.152 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:02.152 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
00:04:02.152 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:02.152 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:02.152 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:02.152 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:02.152 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44516500 kB' 'MemAvailable: 48021440 kB' 'Buffers: 2704 kB' 'Cached: 10506236 kB' 'SwapCached: 0 kB' 'Active: 7520020 kB' 'Inactive: 3506552 kB' 'Active(anon): 7125668 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520828 kB' 'Mapped: 199668 kB' 'Shmem: 6608036 kB' 'KReclaimable: 195316 kB' 'Slab: 563912 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368596 kB' 'KernelStack: 12800 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 8239568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
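Pulling the custom_alloc setup above together: the test asks for 1 GiB worth of default-size pages on node 0 and 2 GiB worth on node 1, derives the page counts from the 2048 kB Hugepagesize reported in the meminfo dump, and hands the per-node request to scripts/setup.sh through HUGENODE, which is why verify_nr_hugepages expects 1536 pages in total. A sketch of that derivation, simplifying what get_test_nr_hugepages does and assuming the sizes are given in kB as the trace suggests:

    # Net effect of the HUGENODE construction traced above (simplified; sizes assumed in kB).
    default_hugepages=2048                        # kB per page, per 'Hugepagesize: 2048 kB'
    node_sizes_kb=(1048576 2097152)               # 1 GiB for node 0, 2 GiB for node 1
    nodes_hp=(); HUGENODE=(); total=0
    for node in "${!node_sizes_kb[@]}"; do
        pages=$(( node_sizes_kb[node] / default_hugepages ))
        nodes_hp[node]=$pages
        HUGENODE+=("nodes_hp[$node]=$pages")
        (( total += pages ))
    done
    hugenode=$(IFS=,; echo "${HUGENODE[*]}")
    echo "HUGENODE=$hugenode"                     # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "expected HugePages_Total: $total"       # 1536, matching the dump above
    echo "expected Hugetlb: $(( total * default_hugepages )) kB"    # 3145728 kB

The resulting HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' is exactly the string the trace exports before invoking setup.sh, and the 1536-page total is what the meminfo dump above reports back.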
00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.416 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc 
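The trace above shows the pattern behind setup/common.sh's get_meminfo: /proc/meminfo is read into an array, any leading "Node N" prefix is stripped, and the loop walks key/value pairs until the requested field (AnonHugePages in this call) matches, at which point its value is echoed (0 in this run). A condensed sketch of the same idea, with an assumed function name and without the per-node handling, might look like this (illustrative only, not the SPDK helper itself):

    # Minimal sketch (assumed name; system-wide /proc/meminfo only, whereas the real
    # helper can also target a NUMA node's meminfo and strips its "Node N" prefix):
    get_meminfo_value() {
        local want=$1 key val _
        while IFS=': ' read -r key val _; do
            [[ $key == "$want" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # Usage: get_meminfo_value AnonHugePages   -> prints 0 on this machine
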
-- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44517084 kB' 'MemAvailable: 48022024 kB' 'Buffers: 2704 kB' 'Cached: 10506240 kB' 'SwapCached: 0 kB' 'Active: 7520368 kB' 'Inactive: 3506552 kB' 'Active(anon): 7126016 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521168 kB' 'Mapped: 199668 kB' 'Shmem: 6608040 kB' 'KReclaimable: 195316 kB' 'Slab: 563992 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368676 kB' 'KernelStack: 12832 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 8239588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.417 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.418 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44516580 kB' 'MemAvailable: 48021520 kB' 'Buffers: 2704 kB' 'Cached: 10506256 kB' 'SwapCached: 0 kB' 'Active: 7521212 kB' 'Inactive: 3506552 kB' 'Active(anon): 7126860 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522036 kB' 
'Mapped: 200104 kB' 'Shmem: 6608056 kB' 'KReclaimable: 195316 kB' 'Slab: 563992 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368676 kB' 'KernelStack: 12832 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 8241096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.419 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.420 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.421 13:15:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:02.421 nr_hugepages=1536 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.421 resv_hugepages=0 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.421 surplus_hugepages=0 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.421 anon_hugepages=0 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44517180 kB' 'MemAvailable: 48022120 kB' 'Buffers: 2704 kB' 'Cached: 10506276 kB' 'SwapCached: 0 kB' 'Active: 7524604 kB' 'Inactive: 3506552 kB' 'Active(anon): 7130252 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525336 kB' 'Mapped: 200104 kB' 'Shmem: 6608076 kB' 'KReclaimable: 195316 kB' 'Slab: 563992 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368676 kB' 'KernelStack: 12784 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 8244812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.421 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.422 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
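[Reader note] The long runs of "continue" entries above are xtrace output from the meminfo lookup helper in test/setup/common.sh (get_meminfo in the trace): it reads /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a node is given, splits each line on ': ', and skips every field until the requested key appears, then echoes that value (HugePages_Rsvd=0 and HugePages_Total=1536 in this run). The snippet below is a minimal reconstruction of that pattern for readers of this log, not the SPDK source; the name get_meminfo_sketch and the final sum check are illustrative assumptions based only on what the trace shows.

#!/usr/bin/env bash
# Illustrative reconstruction (not the SPDK helper itself) of the scan
# visible in the xtrace above: skip meminfo fields with "continue" until
# the requested key is found, then print its value.
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo

    # When a NUMA node is given, prefer that node's own meminfo file,
    # mirroring the /sys/devices/system/node/node<N>/meminfo check in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi

    local line var val _
    while IFS= read -r line; do
        # Per-node files prefix every line with "Node <N> "; strip it so both
        # file formats can be matched the same way.
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] || continue   # the repeated "continue" lines in the trace
        echo "$val"                        # numeric value only (the unit lands in "_")
        return 0
    done < "$file"
    return 1
}

# Example: reproduce the accounting this test checks -- the per-node
# HugePages_Total values should sum to the global count (512 + 1024 = 1536
# in this particular run; actual numbers depend on the machine).
total=$(get_meminfo_sketch HugePages_Total || echo 0)
node0=$(get_meminfo_sketch HugePages_Total 0 || echo 0)
node1=$(get_meminfo_sketch HugePages_Total 1 || echo 0)
echo "HugePages_Total: global=$total node0=$node0 node1=$node1"
(( node0 + node1 == total )) && echo 'per-node split matches the global count'

The per-node loop that follows in the trace (hugepages.sh@115 onward) applies the same lookup with HugePages_Surp for node 0 and node 1, which is why the "node0=512 expecting 512" and "node1=1024 expecting 1024" lines appear near the end of this test.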
00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28038668 kB' 'MemUsed: 4791216 kB' 'SwapCached: 0 kB' 'Active: 2676428 kB' 'Inactive: 109764 kB' 'Active(anon): 2565540 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2517076 kB' 'Mapped: 45584 kB' 'AnonPages: 272256 kB' 'Shmem: 2296424 kB' 'KernelStack: 7272 kB' 'PageTables: 4792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96496 kB' 'Slab: 311924 kB' 'SReclaimable: 96496 kB' 'SUnreclaim: 215428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.423 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.424 13:15:37 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16479332 kB' 'MemUsed: 11232492 kB' 'SwapCached: 0 kB' 'Active: 4849636 kB' 'Inactive: 3396788 kB' 'Active(anon): 4566172 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7991948 kB' 'Mapped: 154904 kB' 'AnonPages: 254556 kB' 'Shmem: 4311696 kB' 'KernelStack: 5560 kB' 'PageTables: 3140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98820 kB' 'Slab: 252068 kB' 'SReclaimable: 98820 kB' 'SUnreclaim: 153248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.424 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.425 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:02.684 node0=512 expecting 512 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:02.684 node1=1024 expecting 1024 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:02.684 00:04:02.684 real 0m1.370s 00:04:02.684 user 0m0.573s 00:04:02.684 sys 0m0.756s 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.684 13:15:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:02.684 ************************************ 00:04:02.684 END TEST custom_alloc 00:04:02.684 ************************************ 00:04:02.684 13:15:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:02.684 13:15:37 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:02.684 13:15:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.684 13:15:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.684 13:15:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.684 ************************************ 00:04:02.684 START TEST no_shrink_alloc 00:04:02.684 ************************************ 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.684 13:15:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.617 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:03.617 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:03.617 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:03.617 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:03.617 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:03.617 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:03.617 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:03.617 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:03.617 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:03.617 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:03.617 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:03.617 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:03.617 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:03.617 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:03.617 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:03.617 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:03.617 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.903 13:15:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45447420 kB' 'MemAvailable: 48952360 kB' 'Buffers: 2704 kB' 'Cached: 10506356 kB' 'SwapCached: 0 kB' 'Active: 7520312 kB' 'Inactive: 3506552 kB' 'Active(anon): 7125960 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520948 kB' 'Mapped: 199744 kB' 'Shmem: 6608156 kB' 'KReclaimable: 195316 kB' 'Slab: 563960 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368644 kB' 'KernelStack: 12816 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8239692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
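
The stretch of trace above and below is the meminfo lookup loop: the script snapshots /proc/meminfo (or a per-node meminfo file when a node id is set), then walks every "Key: value" pair until it hits the requested key (here AnonHugePages). The following is a minimal standalone sketch of that lookup pattern; the variable names (get, node, var, val, mem_f) follow the trace, but the function name and body are illustrative, not the SPDK setup/common.sh source.

#!/usr/bin/env bash
# Sketch: look up one counter from /proc/meminfo, or from a per-node
# meminfo file when a NUMA node id is given.
get_meminfo_sketch() {
    local get=$1          # e.g. AnonHugePages, HugePages_Surp, HugePages_Rsvd
    local node=${2:-}     # optional NUMA node id
    local mem_f=/proc/meminfo
    local var val _

    # Per-node statistics live under sysfs when a node id is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Per-node files prefix every line with "Node <id>"; strip that so the
    # key always lands in $var, then scan until the requested key matches.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")

    echo 0
}

get_meminfo_sketch HugePages_Free      # system-wide count
get_meminfo_sketch HugePages_Free 0    # node 0 only, if the sysfs file exists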
00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.903 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
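
Before this scan started, the no_shrink_alloc trace showed get_test_nr_hugepages 2097152 0 resolving to nr_hugepages=1024 pinned to node 0. A small sketch of that sizing step, assuming the size argument is in kB and the hugepage size is the 2048 kB reported in the snapshot above (2097152 / 2048 = 1024); the function name and the single-node fallback are illustrative simplifications, not the hugepages.sh source.

#!/usr/bin/env bash
# Sketch: turn a requested total (kB) into a hugepage count and assign it
# to the nodes named on the command line.
declare -a nodes_test=()

get_test_nr_hugepages_sketch() {
    local size_kb=$1; shift
    local user_nodes=("$@")          # e.g. 0, or 0 1
    local hp_kb nr_hugepages n
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this rig
    nr_hugepages=$(( size_kb / hp_kb ))

    if (( ${#user_nodes[@]} > 0 )); then
        # Explicit node list: each named node gets the full count,
        # mirroring nodes_test[0]=1024 in the trace.
        for n in "${user_nodes[@]}"; do
            nodes_test[n]=$nr_hugepages
        done
    else
        nodes_test[0]=$nr_hugepages  # simplified; the real script spreads over all nodes
    fi
}

get_test_nr_hugepages_sketch 2097152 0
echo "node0 gets ${nodes_test[0]} hugepages"   # 1024 with 2048 kB pages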
00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.904 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45454316 kB' 'MemAvailable: 48959256 kB' 'Buffers: 2704 kB' 'Cached: 10506360 kB' 'SwapCached: 0 kB' 'Active: 7520892 kB' 'Inactive: 3506552 kB' 'Active(anon): 7126540 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521604 kB' 'Mapped: 199684 kB' 'Shmem: 6608160 kB' 'KReclaimable: 195316 kB' 'Slab: 563928 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368612 kB' 'KernelStack: 12880 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8239708 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.905 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 
13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.906 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
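
The AnonHugePages, HugePages_Surp and HugePages_Rsvd values being collected in this stretch feed the same end-of-test comparison that closed out custom_alloc earlier ("node0=512 expecting 512", "node1=1024 expecting 1024", [[ 512,1024 == 512,1024 ]]). A minimal sketch of that comparison pattern follows; the hard-coded per-node counts are taken from the custom_alloc echoes for illustration, and the function is not the verify_nr_hugepages source.

#!/usr/bin/env bash
# Sketch: compare observed per-node hugepage counts against the expected
# allocation, printing one "nodeN=X expecting Y" line per node and then
# doing a single comma-joined equality check, as in the trace.
verify_nodes_sketch() {
    local -A expected_per_node=([0]=512 [1]=1024)  # values from the custom_alloc echoes above
    local -A seen_per_node=([0]=512 [1]=1024)      # stand-in for counters read via the meminfo lookup
    local node want got

    for node in 0 1; do
        echo "node$node=${seen_per_node[$node]} expecting ${expected_per_node[$node]}"
        want+="${want:+,}${expected_per_node[$node]}"
        got+="${got:+,}${seen_per_node[$node]}"
    done

    [[ $got == "$want" ]]
}

verify_nodes_sketch && echo "per-node hugepage counts match"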
00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45455196 kB' 'MemAvailable: 48960136 kB' 'Buffers: 2704 kB' 'Cached: 10506380 kB' 'SwapCached: 0 kB' 'Active: 7520580 kB' 'Inactive: 3506552 kB' 'Active(anon): 7126228 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521268 kB' 'Mapped: 199684 kB' 'Shmem: 6608180 kB' 'KReclaimable: 195316 kB' 'Slab: 563968 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368652 kB' 'KernelStack: 12864 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8239732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.907 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 
13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.908 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
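The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time: with IFS set to ': ' it reads each line into var and val, and keeps issuing continue until var equals the key it was asked for (HugePages_Rsvd in this pass), at which point it echoes the value and returns. A minimal standalone sketch of the same parsing pattern, not the SPDK helper itself (the function name below is illustrative):

    # Sketch: look up one /proc/meminfo field the way the traced loop does.
    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            # var holds the field name, val the number, the unit (kB) lands in _
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Rsvd    # prints 0 on this machine, matching the trace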
00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.909 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.910 nr_hugepages=1024 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.910 resv_hugepages=0 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.910 surplus_hugepages=0 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.910 anon_hugepages=0 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45455196 kB' 'MemAvailable: 48960136 kB' 'Buffers: 2704 kB' 'Cached: 10506380 kB' 'SwapCached: 0 kB' 'Active: 7520320 kB' 'Inactive: 3506552 kB' 'Active(anon): 7125968 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521000 kB' 'Mapped: 199684 kB' 'Shmem: 6608180 kB' 'KReclaimable: 195316 kB' 'Slab: 563968 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368652 kB' 'KernelStack: 12864 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8239752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.910 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
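Interleaved with these per-key scans, setup/hugepages.sh does a small bookkeeping check: the assertion repeated at hugepages.sh@110 just below compares the HugePages_Total it has just read (1024) against nr_hugepages + surp + resv, i.e. the requested page count plus the surplus and reserved counts read back from /proc/meminfo. With surp=0 and resv=0 in this run the check reduces to HugePages_Total == nr_hugepages == 1024. A hedged sketch of that check, reusing the illustrative get_meminfo_value helper from the earlier sketch:

    # Sketch of the consistency check traced at hugepages.sh@110 (values from this run).
    nr_hugepages=1024                               # what the test requested
    resv=$(get_meminfo_value HugePages_Rsvd)        # 0 here
    surp=$(get_meminfo_value HugePages_Surp)        # 0 here
    total=$(get_meminfo_value HugePages_Total)      # 1024 here

    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2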
00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.911 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26999732 kB' 'MemUsed: 5830152 kB' 'SwapCached: 0 kB' 'Active: 2670800 kB' 'Inactive: 109764 kB' 'Active(anon): 2559912 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2517080 kB' 'Mapped: 44932 kB' 'AnonPages: 266604 kB' 'Shmem: 2296428 kB' 'KernelStack: 7320 kB' 'PageTables: 4764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96496 kB' 'Slab: 311928 kB' 'SReclaimable: 96496 kB' 'SUnreclaim: 215432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.912 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
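This pass differs from the earlier ones only in where it reads from: get_meminfo was called as get_meminfo HugePages_Surp 0, so common.sh switched mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and stripped the leading "Node 0 " from every line (the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace) before running the same key-matching loop. A hedged sketch of the per-node variant, again with an illustrative helper name; the +([0-9]) pattern needs extglob, as in common.sh:

    shopt -s extglob    # required for the +([0-9]) pattern below

    # Sketch: per-node lookup against /sys/devices/system/node/nodeN/meminfo.
    get_node_meminfo_value() {
        local node=$1 key=$2 line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }             # drop the "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    get_node_meminfo_value 0 HugePages_Surp         # 0 on node0 in this run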
00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.913 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.914 node0=1024 expecting 1024 00:04:03.914 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.914 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:03.914 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:03.914 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:03.914 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.914 13:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.300 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:05.300 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:05.300 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:05.300 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:05.300 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:05.300 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:05.300 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:05.300 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:05.300 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:05.300 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:05.300 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:05.300 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:05.300 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:05.300 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:05.300 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:05.300 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:05.300 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:05.300 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # 
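At this point the no_shrink_alloc case has what it needs: node0 reports 1024 hugepages against 1024 expected. It then sets CLEAR_HUGE=no and NRHUGE=512 and re-runs scripts/setup.sh, which reports "Requested 512 hugepages but 1024 already allocated on node0" and leaves both the existing pool and the already-bound vfio-pci devices untouched. Outside the SPDK scripts, that per-node pool sits behind a standard sysfs knob; a hedged sketch of inspecting it, with the 2048 kB page size and node0 taken from this run:

    # Sketch: the per-node hugepage pool that setup.sh decided not to shrink.
    pool=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    cat "$pool"                        # 1024 in this run
    # echo 512 | sudo tee "$pool"      # would shrink the pool to 512 pages; not done
                                       # here because 1024 pages were already allocated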
verify_nr_hugepages 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45473892 kB' 'MemAvailable: 48978832 kB' 'Buffers: 2704 kB' 'Cached: 10506476 kB' 'SwapCached: 0 kB' 'Active: 7521236 kB' 'Inactive: 3506552 kB' 'Active(anon): 7126884 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521752 kB' 'Mapped: 199760 kB' 'Shmem: 6608276 kB' 'KReclaimable: 195316 kB' 'Slab: 563884 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368568 kB' 'KernelStack: 12864 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8239940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.300 13:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.300 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.301 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
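[editor's note] The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time and skipping every key until it reaches the requested one (AnonHugePages here). A minimal sketch of that pattern, using a hypothetical read_meminfo helper rather than the script's own function:

    # Sketch only: hypothetical helper, not the setup/common.sh implementation.
    # Reads /proc/meminfo line by line and prints the value for the requested key,
    # mirroring the IFS=': ' / read -r var val _ loop seen in the trace.
    read_meminfo() {
        local want=$1 key val _
        while IFS=': ' read -r key val _; do
            [[ $key == "$want" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # e.g. read_meminfo AnonHugePages  -> 0 (kB) in this run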
00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45474612 kB' 'MemAvailable: 48979552 kB' 'Buffers: 2704 kB' 'Cached: 10506480 kB' 'SwapCached: 0 kB' 'Active: 7521056 kB' 'Inactive: 3506552 kB' 'Active(anon): 7126704 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521656 kB' 'Mapped: 199688 kB' 'Shmem: 6608280 kB' 'KReclaimable: 195316 kB' 'Slab: 563884 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368568 kB' 'KernelStack: 12864 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8239956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.302 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
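[editor's note] get_meminfo is invoked here with no node argument (local node=), so the per-node path it probes, /sys/devices/system/node/node/meminfo with an empty node number, does not exist and the helper falls back to the global /proc/meminfo. A hedged sketch of that source-selection step (names are illustrative):

    # Pick a meminfo source: per-node sysfs file if a node number is given
    # and present, otherwise the system-wide /proc/meminfo.
    pick_meminfo_source() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
            mem_f=/sys/devices/system/node/node${node}/meminfo
        fi
        echo "$mem_f"
    }
    # pick_meminfo_source     -> /proc/meminfo (empty node, as in this run)
    # pick_meminfo_source 0   -> /sys/devices/system/node/node0/meminfo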
00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 
13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.303 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.304 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45474612 kB' 'MemAvailable: 48979552 kB' 'Buffers: 2704 kB' 'Cached: 10506500 kB' 'SwapCached: 0 kB' 'Active: 7520872 kB' 'Inactive: 3506552 kB' 'Active(anon): 7126520 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521416 kB' 'Mapped: 199688 kB' 'Shmem: 6608300 kB' 'KReclaimable: 195316 kB' 'Slab: 564004 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368688 kB' 'KernelStack: 12896 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8239980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.305 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.306 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.307 nr_hugepages=1024 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.307 resv_hugepages=0 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.307 surplus_hugepages=0 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.307 anon_hugepages=0 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45474360 kB' 'MemAvailable: 48979300 kB' 'Buffers: 2704 kB' 'Cached: 10506520 kB' 'SwapCached: 0 kB' 'Active: 7520920 kB' 'Inactive: 3506552 kB' 'Active(anon): 7126568 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521416 kB' 'Mapped: 199688 kB' 'Shmem: 6608320 kB' 'KReclaimable: 195316 kB' 'Slab: 564004 kB' 'SReclaimable: 195316 kB' 'SUnreclaim: 368688 kB' 'KernelStack: 12896 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8240000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1875548 kB' 'DirectMap2M: 15869952 kB' 'DirectMap1G: 51380224 kB' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.307 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 
13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.308 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.309 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27000364 kB' 'MemUsed: 5829520 kB' 'SwapCached: 0 kB' 'Active: 2670396 kB' 'Inactive: 109764 kB' 'Active(anon): 2559508 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2517080 kB' 'Mapped: 44936 kB' 'AnonPages: 266156 kB' 'Shmem: 2296428 kB' 'KernelStack: 7352 kB' 'PageTables: 4724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96496 kB' 'Slab: 311844 kB' 'SReclaimable: 96496 kB' 'SUnreclaim: 215348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 
13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.310 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 
13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.311 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.312 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.312 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.312 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.312 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.312 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.312 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.312 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.312 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.312 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.312 node0=1024 expecting 1024 00:04:05.312 13:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.312 00:04:05.312 real 0m2.782s 00:04:05.312 user 0m1.146s 00:04:05.312 sys 0m1.557s 00:04:05.312 13:15:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.312 13:15:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:05.312 ************************************ 00:04:05.312 END TEST no_shrink_alloc 00:04:05.312 ************************************ 00:04:05.312 13:15:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 
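Editor's note: the long runs of '# continue' entries above come from the get_meminfo helper in test/setup/common.sh. With IFS set to ': ' it walks /proc/meminfo (or a per-node meminfo file under /sys/devices/system/node) one 'key: value' pair at a time, skipping every key until it reaches the requested one, then echoes that value — 0 for HugePages_Rsvd, 1024 for HugePages_Total, 0 for node0's HugePages_Surp. The no_shrink_alloc test uses those values to confirm the 1024 allocated pages are still fully accounted for. A minimal sketch of the same lookup, reduced to the system-wide /proc/meminfo case and using an illustrative helper name (not the exact SPDK implementation):

    # Sketch only: the per-node variant additionally reads
    # /sys/devices/system/node/nodeN/meminfo and strips the "Node N " prefix.
    get_meminfo_sketch() {
        local want=$1 key val _
        while IFS=': ' read -r key val _; do
            [[ $key == "$want" ]] && { echo "$val"; return 0; }   # found the requested key
        done < /proc/meminfo                                       # otherwise keep scanning
        return 1
    }

    nr=$(get_meminfo_sketch HugePages_Total)    # 1024 in the run above
    rsvd=$(get_meminfo_sketch HugePages_Rsvd)   # 0
    surp=$(get_meminfo_sketch HugePages_Surp)   # 0
    (( nr == 1024 + surp + rsvd )) && echo 'hugepage accounting consistent'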
00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:05.312 13:15:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:05.312 00:04:05.312 real 0m11.260s 00:04:05.312 user 0m4.281s 00:04:05.312 sys 0m5.895s 00:04:05.312 13:15:40 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.312 13:15:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.312 ************************************ 00:04:05.312 END TEST hugepages 00:04:05.312 ************************************ 00:04:05.570 13:15:40 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:05.570 13:15:40 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:05.570 13:15:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.570 13:15:40 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.570 13:15:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.570 ************************************ 00:04:05.570 START TEST driver 00:04:05.570 ************************************ 00:04:05.570 13:15:40 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:05.570 * Looking for test storage... 
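Editor's note: before the hugepages suite exits, clear_hp walks every NUMA node and, for each supported hugepage size under /sys/devices/system/node/node$node/hugepages/, echoes 0 — presumably into that size's nr_hugepages file (xtrace does not print redirection targets) — so later suites start from a clean slate; CLEAR_HUGE=yes is exported for the setup scripts that follow, and the driver suite then begins by locating its test storage. A rough equivalent of that cleanup, assuming the standard kernel sysfs layout and root privileges:

    # Sketch of the clear_hp idea: zero out every hugepage size on every node.
    shopt -s nullglob
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"    # release this node's pages of this size
        done
    done
    export CLEAR_HUGE=yes    # so later SPDK setup.sh runs also clear existing pages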
00:04:05.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:05.570 13:15:40 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:05.570 13:15:40 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.570 13:15:40 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:08.103 13:15:42 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:08.103 13:15:42 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.103 13:15:42 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.103 13:15:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:08.103 ************************************ 00:04:08.103 START TEST guess_driver 00:04:08.103 ************************************ 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:08.103 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:08.103 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:08.103 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:08.103 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:08.103 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:08.103 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:08.103 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:08.103 13:15:42 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:08.103 Looking for driver=vfio-pci 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.103 13:15:42 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.047 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.047 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.047 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.306 13:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.241 13:15:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:10.241 13:15:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:10.242 13:15:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.242 13:15:44 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:10.242 13:15:44 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:10.242 13:15:44 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.242 13:15:44 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.770 00:04:12.770 real 0m4.689s 00:04:12.770 user 0m1.058s 00:04:12.770 sys 0m1.782s 00:04:12.770 13:15:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.770 13:15:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:12.770 ************************************ 00:04:12.770 END TEST guess_driver 00:04:12.770 ************************************ 00:04:12.770 13:15:47 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:12.770 00:04:12.770 real 0m7.282s 00:04:12.770 user 0m1.658s 00:04:12.770 sys 0m2.784s 00:04:12.770 13:15:47 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.770 13:15:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:12.770 ************************************ 00:04:12.770 END TEST driver 00:04:12.770 ************************************ 00:04:12.770 13:15:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:12.770 13:15:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:12.770 13:15:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.770 13:15:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.770 13:15:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:12.770 ************************************ 00:04:12.770 START TEST devices 00:04:12.770 ************************************ 00:04:12.770 13:15:47 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:12.770 * Looking for test storage... 00:04:12.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:12.770 13:15:47 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:12.770 13:15:47 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:12.770 13:15:47 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.770 13:15:47 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:14.669 13:15:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:14.669 13:15:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:14.669 13:15:48 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:14.669 13:15:48 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.669 13:15:48 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:14.669 13:15:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:14.669 13:15:48 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.669 13:15:48 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:14.669 13:15:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:14.669 
13:15:48 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:14.669 No valid GPT data, bailing 00:04:14.669 13:15:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:14.669 13:15:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:14.669 13:15:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:14.669 13:15:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:14.669 13:15:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:14.669 13:15:48 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:14.669 13:15:48 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:14.669 13:15:48 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.669 13:15:48 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.669 13:15:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:14.669 ************************************ 00:04:14.669 START TEST nvme_mount 00:04:14.669 ************************************ 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:14.669 13:15:49 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:15.603 Creating new GPT entries in memory. 00:04:15.603 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:15.603 other utilities. 00:04:15.603 13:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:15.603 13:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.603 13:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:15.603 13:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:15.603 13:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:16.536 Creating new GPT entries in memory. 00:04:16.536 The operation has completed successfully. 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 131031 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:16.536 13:15:51 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:16.536 13:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:16.537 13:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.537 13:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:16.537 13:15:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:16.537 13:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.537 13:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.470 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.728 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.728 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:17.728 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.728 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.728 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.728 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:17.728 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.728 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.728 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.728 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:17.728 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:17.728 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:17.728 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:17.986 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:17.986 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:17.986 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:17.986 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:17.986 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:17.986 13:15:52 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:17.986 13:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.986 13:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:17.986 13:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:17.986 13:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.246 13:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.186 13:15:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:20.561 13:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.561 13:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.561 13:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:20.561 13:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:20.561 13:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:20.561 13:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.561 13:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.561 13:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.561 13:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:20.561 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:20.561 00:04:20.561 real 0m6.112s 00:04:20.561 user 0m1.419s 00:04:20.561 sys 0m2.273s 00:04:20.561 13:15:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.561 13:15:55 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:20.561 ************************************ 00:04:20.561 END TEST nvme_mount 00:04:20.561 ************************************ 00:04:20.561 13:15:55 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:20.561 13:15:55 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:20.561 13:15:55 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.561 13:15:55 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.561 13:15:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:20.561 ************************************ 00:04:20.561 START TEST dm_mount 00:04:20.561 ************************************ 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:20.561 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:20.562 13:15:55 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:21.545 Creating new GPT entries in memory. 00:04:21.545 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:21.545 other utilities. 00:04:21.545 13:15:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:21.545 13:15:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.545 13:15:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:21.545 13:15:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:21.545 13:15:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:22.479 Creating new GPT entries in memory. 00:04:22.479 The operation has completed successfully. 00:04:22.479 13:15:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:22.479 13:15:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.479 13:15:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:22.479 13:15:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:22.479 13:15:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:23.853 The operation has completed successfully. 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 133412 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:23.853 13:15:58 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.854 13:15:58 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.787 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.788 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.788 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.788 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.788 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.788 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.788 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:24.788 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:25.046 13:15:59 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.046 13:15:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.980 13:16:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:25.981 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.240 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.240 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:26.240 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:26.240 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:26.240 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.240 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:26.240 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:26.240 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.240 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:26.240 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:26.240 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:26.240 13:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:26.240 00:04:26.240 real 0m5.686s 00:04:26.240 user 0m0.918s 00:04:26.240 sys 0m1.649s 00:04:26.240 13:16:00 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.240 13:16:00 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:26.240 ************************************ 00:04:26.240 END TEST dm_mount 00:04:26.240 ************************************ 00:04:26.240 13:16:00 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:26.240 13:16:00 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:26.240 13:16:00 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:26.240 13:16:00 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.240 13:16:00 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.240 13:16:00 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:26.240 13:16:00 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.240 13:16:00 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.499 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:26.499 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:26.499 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:26.499 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:26.499 13:16:01 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:26.499 13:16:01 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.499 13:16:01 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:26.499 13:16:01 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.499 13:16:01 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:26.499 13:16:01 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.499 13:16:01 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:26.499 00:04:26.499 real 0m13.765s 00:04:26.499 user 0m3.031s 00:04:26.499 sys 0m4.959s 00:04:26.499 13:16:01 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.499 13:16:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:26.499 ************************************ 00:04:26.499 END TEST devices 00:04:26.499 ************************************ 00:04:26.499 13:16:01 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:26.499 00:04:26.499 real 0m42.970s 00:04:26.499 user 0m12.269s 00:04:26.499 sys 0m18.973s 00:04:26.499 13:16:01 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.499 13:16:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:26.499 ************************************ 00:04:26.499 END TEST setup.sh 00:04:26.499 ************************************ 00:04:26.499 13:16:01 -- common/autotest_common.sh@1142 -- # return 0 00:04:26.499 13:16:01 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:27.875 Hugepages 00:04:27.875 node hugesize free / total 00:04:27.875 node0 1048576kB 0 / 0 00:04:27.875 node0 2048kB 2048 / 2048 00:04:27.875 node1 1048576kB 0 / 0 00:04:27.875 node1 2048kB 0 / 0 00:04:27.875 00:04:27.875 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:27.875 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:27.875 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:27.875 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:27.875 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:27.875 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:27.875 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:27.875 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:27.875 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:27.875 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:27.875 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:27.875 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:27.875 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:27.875 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:27.875 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:27.875 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:27.875 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:27.875 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:27.875 13:16:02 -- spdk/autotest.sh@130 -- # uname -s 00:04:27.875 13:16:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:27.875 13:16:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:27.875 13:16:02 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:28.807 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:28.807 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:29.065 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:29.065 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:29.065 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:29.065 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:29.065 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:29.065 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:29.065 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:29.065 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:29.065 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:29.065 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:29.065 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:29.065 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:29.065 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:29.065 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:29.997 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:30.255 13:16:04 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:31.189 13:16:05 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:31.189 13:16:05 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:31.189 13:16:05 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:31.189 13:16:05 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:31.189 13:16:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:31.189 13:16:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:31.189 13:16:05 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:31.189 13:16:05 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:31.189 13:16:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:31.189 13:16:05 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:31.189 13:16:05 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:31.189 13:16:05 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.121 Waiting for block devices as requested 00:04:32.378 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:32.378 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:32.635 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:32.635 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:32.635 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:32.635 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:32.891 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:32.891 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:32.891 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:04:32.891 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:33.148 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:33.148 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:33.148 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:33.148 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:33.405 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:33.405 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:33.405 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:33.661 13:16:08 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:33.661 13:16:08 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:33.661 13:16:08 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:33.661 13:16:08 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:04:33.661 13:16:08 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:33.661 13:16:08 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:33.661 13:16:08 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:33.661 13:16:08 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:33.661 13:16:08 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:33.661 13:16:08 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:33.661 13:16:08 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:33.661 13:16:08 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:33.662 13:16:08 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:33.662 13:16:08 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:33.662 13:16:08 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:33.662 13:16:08 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:33.662 13:16:08 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:33.662 13:16:08 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:33.662 13:16:08 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:33.662 13:16:08 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:33.662 13:16:08 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:33.662 13:16:08 -- common/autotest_common.sh@1557 -- # continue 00:04:33.662 13:16:08 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:33.662 13:16:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.662 13:16:08 -- common/autotest_common.sh@10 -- # set +x 00:04:33.662 13:16:08 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:33.662 13:16:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.662 13:16:08 -- common/autotest_common.sh@10 -- # set +x 00:04:33.662 13:16:08 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.030 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:35.030 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:35.030 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:35.030 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:35.030 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:35.030 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:35.030 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:35.030 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:35.030 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:35.030 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:04:35.030 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:35.030 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:35.030 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:35.030 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:35.030 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:35.030 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:35.962 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:35.962 13:16:10 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:35.962 13:16:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:35.962 13:16:10 -- common/autotest_common.sh@10 -- # set +x 00:04:35.962 13:16:10 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:35.962 13:16:10 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:35.962 13:16:10 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:35.962 13:16:10 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:35.962 13:16:10 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:35.962 13:16:10 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:35.962 13:16:10 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:35.962 13:16:10 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:35.962 13:16:10 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.962 13:16:10 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:35.962 13:16:10 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:35.962 13:16:10 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:35.962 13:16:10 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:35.962 13:16:10 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:35.962 13:16:10 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:35.962 13:16:10 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:35.962 13:16:10 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:35.962 13:16:10 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:35.962 13:16:10 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:35.962 13:16:10 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:35.962 13:16:10 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=138585 00:04:35.962 13:16:10 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.962 13:16:10 -- common/autotest_common.sh@1598 -- # waitforlisten 138585 00:04:35.962 13:16:10 -- common/autotest_common.sh@829 -- # '[' -z 138585 ']' 00:04:35.962 13:16:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.962 13:16:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.962 13:16:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.962 13:16:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.962 13:16:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.221 [2024-07-13 13:16:10.727913] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:36.221 [2024-07-13 13:16:10.728056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138585 ] 00:04:36.221 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.221 [2024-07-13 13:16:10.851017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.505 [2024-07-13 13:16:11.109667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.441 13:16:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.441 13:16:11 -- common/autotest_common.sh@862 -- # return 0 00:04:37.441 13:16:11 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:37.441 13:16:11 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:37.441 13:16:11 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:40.724 nvme0n1 00:04:40.724 13:16:15 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:40.725 [2024-07-13 13:16:15.330946] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:40.725 [2024-07-13 13:16:15.331020] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:40.725 request: 00:04:40.725 { 00:04:40.725 "nvme_ctrlr_name": "nvme0", 00:04:40.725 "password": "test", 00:04:40.725 "method": "bdev_nvme_opal_revert", 00:04:40.725 "req_id": 1 00:04:40.725 } 00:04:40.725 Got JSON-RPC error response 00:04:40.725 response: 00:04:40.725 { 00:04:40.725 "code": -32603, 00:04:40.725 "message": "Internal error" 00:04:40.725 } 00:04:40.725 13:16:15 -- common/autotest_common.sh@1604 -- # true 00:04:40.725 13:16:15 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:40.725 13:16:15 -- common/autotest_common.sh@1608 -- # killprocess 138585 00:04:40.725 13:16:15 -- common/autotest_common.sh@948 -- # '[' -z 138585 ']' 00:04:40.725 13:16:15 -- common/autotest_common.sh@952 -- # kill -0 138585 00:04:40.725 13:16:15 -- common/autotest_common.sh@953 -- # uname 00:04:40.725 13:16:15 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.725 13:16:15 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 138585 00:04:40.725 13:16:15 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.725 13:16:15 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.725 13:16:15 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 138585' 00:04:40.725 killing process with pid 138585 00:04:40.725 13:16:15 -- common/autotest_common.sh@967 -- # kill 138585 00:04:40.725 13:16:15 -- common/autotest_common.sh@972 -- # wait 138585 00:04:44.906 13:16:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:44.906 13:16:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:44.906 13:16:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:44.906 13:16:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:44.906 13:16:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:44.906 13:16:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.906 13:16:19 -- common/autotest_common.sh@10 -- # set +x 00:04:44.906 13:16:19 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:44.906 13:16:19 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:44.906 13:16:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.906 13:16:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.906 13:16:19 -- common/autotest_common.sh@10 -- # set +x 00:04:44.906 ************************************ 00:04:44.906 START TEST env 00:04:44.906 ************************************ 00:04:44.906 13:16:19 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:44.906 * Looking for test storage... 00:04:44.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:44.906 13:16:19 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:44.906 13:16:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.906 13:16:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.906 13:16:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.906 ************************************ 00:04:44.906 START TEST env_memory 00:04:44.906 ************************************ 00:04:44.906 13:16:19 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:44.906 00:04:44.906 00:04:44.906 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.906 http://cunit.sourceforge.net/ 00:04:44.906 00:04:44.906 00:04:44.906 Suite: memory 00:04:44.906 Test: alloc and free memory map ...[2024-07-13 13:16:19.242935] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:44.906 passed 00:04:44.906 Test: mem map translation ...[2024-07-13 13:16:19.286531] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:44.906 [2024-07-13 13:16:19.286573] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:44.906 [2024-07-13 13:16:19.286646] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:44.906 [2024-07-13 13:16:19.286678] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:44.906 passed 00:04:44.907 Test: mem map registration ...[2024-07-13 13:16:19.355433] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:44.907 [2024-07-13 13:16:19.355474] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:44.907 passed 00:04:44.907 Test: mem map adjacent registrations ...passed 00:04:44.907 00:04:44.907 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.907 suites 1 1 n/a 0 0 00:04:44.907 tests 4 4 4 0 0 00:04:44.907 asserts 152 152 152 0 n/a 00:04:44.907 00:04:44.907 Elapsed time = 0.247 seconds 00:04:44.907 00:04:44.907 real 0m0.265s 00:04:44.907 user 0m0.249s 00:04:44.907 sys 0m0.015s 00:04:44.907 13:16:19 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.907 13:16:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:44.907 ************************************ 00:04:44.907 END TEST env_memory 00:04:44.907 ************************************ 00:04:44.907 13:16:19 env -- common/autotest_common.sh@1142 -- # return 0 00:04:44.907 13:16:19 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:44.907 13:16:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.907 13:16:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.907 13:16:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.907 ************************************ 00:04:44.907 START TEST env_vtophys 00:04:44.907 ************************************ 00:04:44.907 13:16:19 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:44.907 EAL: lib.eal log level changed from notice to debug 00:04:44.907 EAL: Detected lcore 0 as core 0 on socket 0 00:04:44.907 EAL: Detected lcore 1 as core 1 on socket 0 00:04:44.907 EAL: Detected lcore 2 as core 2 on socket 0 00:04:44.907 EAL: Detected lcore 3 as core 3 on socket 0 00:04:44.907 EAL: Detected lcore 4 as core 4 on socket 0 00:04:44.907 EAL: Detected lcore 5 as core 5 on socket 0 00:04:44.907 EAL: Detected lcore 6 as core 8 on socket 0 00:04:44.907 EAL: Detected lcore 7 as core 9 on socket 0 00:04:44.907 EAL: Detected lcore 8 as core 10 on socket 0 00:04:44.907 EAL: Detected lcore 9 as core 11 on socket 0 00:04:44.907 EAL: Detected lcore 10 as core 12 on socket 0 00:04:44.907 EAL: Detected lcore 11 as core 13 on socket 0 00:04:44.907 EAL: Detected lcore 12 as core 0 on socket 1 00:04:44.907 EAL: Detected lcore 13 as core 1 on socket 1 00:04:44.907 EAL: Detected lcore 14 as core 2 on socket 1 00:04:44.907 EAL: Detected lcore 15 as core 3 on socket 1 00:04:44.907 EAL: Detected lcore 16 as core 4 on socket 1 00:04:44.907 EAL: Detected lcore 17 as core 5 on socket 1 00:04:44.907 EAL: Detected lcore 18 as core 8 on socket 1 00:04:44.907 EAL: Detected lcore 19 as core 9 on socket 1 00:04:44.907 EAL: Detected lcore 20 as core 10 on socket 1 00:04:44.907 EAL: Detected lcore 21 as core 11 on socket 1 00:04:44.907 EAL: Detected lcore 22 as core 12 on socket 1 00:04:44.907 EAL: Detected lcore 23 as core 13 on socket 1 00:04:44.907 EAL: Detected lcore 24 as core 0 on socket 0 00:04:44.907 EAL: Detected lcore 25 as core 1 on socket 0 00:04:44.907 EAL: Detected lcore 26 as core 2 on socket 0 00:04:44.907 EAL: Detected lcore 27 as core 3 on socket 0 00:04:44.907 EAL: Detected lcore 28 as core 4 on socket 0 00:04:44.907 EAL: Detected lcore 29 as core 5 on socket 0 00:04:44.907 EAL: Detected lcore 30 as core 8 on socket 0 00:04:44.907 EAL: Detected lcore 31 as core 9 on socket 0 00:04:44.907 EAL: Detected lcore 32 as core 10 on socket 0 00:04:44.907 EAL: Detected lcore 33 as core 11 on socket 0 00:04:44.907 EAL: Detected lcore 34 as core 12 on socket 0 00:04:44.907 EAL: Detected lcore 35 as core 13 on socket 0 00:04:44.907 EAL: Detected lcore 36 as core 0 on socket 1 00:04:44.907 EAL: Detected lcore 37 as core 1 on socket 1 00:04:44.907 EAL: Detected lcore 38 as core 2 on socket 1 00:04:44.907 EAL: Detected lcore 39 as core 3 on socket 1 00:04:44.907 EAL: Detected lcore 40 as core 4 on socket 1 00:04:44.907 EAL: Detected lcore 41 as core 5 on socket 1 00:04:44.907 EAL: Detected 
lcore 42 as core 8 on socket 1 00:04:44.907 EAL: Detected lcore 43 as core 9 on socket 1 00:04:44.907 EAL: Detected lcore 44 as core 10 on socket 1 00:04:44.907 EAL: Detected lcore 45 as core 11 on socket 1 00:04:44.907 EAL: Detected lcore 46 as core 12 on socket 1 00:04:44.907 EAL: Detected lcore 47 as core 13 on socket 1 00:04:44.907 EAL: Maximum logical cores by configuration: 128 00:04:44.907 EAL: Detected CPU lcores: 48 00:04:44.907 EAL: Detected NUMA nodes: 2 00:04:44.907 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:44.907 EAL: Detected shared linkage of DPDK 00:04:44.907 EAL: No shared files mode enabled, IPC will be disabled 00:04:44.907 EAL: Bus pci wants IOVA as 'DC' 00:04:44.907 EAL: Buses did not request a specific IOVA mode. 00:04:44.907 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:44.907 EAL: Selected IOVA mode 'VA' 00:04:44.907 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.907 EAL: Probing VFIO support... 00:04:44.907 EAL: IOMMU type 1 (Type 1) is supported 00:04:44.907 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:44.907 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:44.907 EAL: VFIO support initialized 00:04:44.907 EAL: Ask a virtual area of 0x2e000 bytes 00:04:44.907 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:44.907 EAL: Setting up physically contiguous memory... 00:04:44.907 EAL: Setting maximum number of open files to 524288 00:04:44.907 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:44.907 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:44.907 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:44.907 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.907 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:44.907 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.907 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.907 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:44.907 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:44.907 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.907 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:44.907 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.907 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.907 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:44.907 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:44.907 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.907 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:44.907 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.907 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.907 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:44.907 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:44.907 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.907 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:44.907 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.907 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.907 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:44.907 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:44.907 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:44.907 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.907 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:44.907 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:44.907 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.907 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:44.907 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:44.907 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.907 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:44.907 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.907 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.907 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:44.907 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:44.907 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.907 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:44.907 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.907 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.907 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:44.907 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:44.907 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.907 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:44.907 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.907 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.907 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:44.907 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:44.907 EAL: Hugepages will be freed exactly as allocated. 00:04:44.907 EAL: No shared files mode enabled, IPC is disabled 00:04:44.907 EAL: No shared files mode enabled, IPC is disabled 00:04:44.907 EAL: TSC frequency is ~2700000 KHz 00:04:44.907 EAL: Main lcore 0 is ready (tid=7fc5a0c74a40;cpuset=[0]) 00:04:44.907 EAL: Trying to obtain current memory policy. 00:04:44.907 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.907 EAL: Restoring previous memory policy: 0 00:04:44.907 EAL: request: mp_malloc_sync 00:04:44.907 EAL: No shared files mode enabled, IPC is disabled 00:04:44.907 EAL: Heap on socket 0 was expanded by 2MB 00:04:44.907 EAL: No shared files mode enabled, IPC is disabled 00:04:45.165 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:45.165 EAL: Mem event callback 'spdk:(nil)' registered 00:04:45.165 00:04:45.165 00:04:45.165 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.165 http://cunit.sourceforge.net/ 00:04:45.165 00:04:45.165 00:04:45.165 Suite: components_suite 00:04:45.423 Test: vtophys_malloc_test ...passed 00:04:45.423 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:45.423 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.423 EAL: Restoring previous memory policy: 4 00:04:45.423 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.423 EAL: request: mp_malloc_sync 00:04:45.423 EAL: No shared files mode enabled, IPC is disabled 00:04:45.423 EAL: Heap on socket 0 was expanded by 4MB 00:04:45.423 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.423 EAL: request: mp_malloc_sync 00:04:45.423 EAL: No shared files mode enabled, IPC is disabled 00:04:45.423 EAL: Heap on socket 0 was shrunk by 4MB 00:04:45.423 EAL: Trying to obtain current memory policy. 
00:04:45.423 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.423 EAL: Restoring previous memory policy: 4 00:04:45.423 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.423 EAL: request: mp_malloc_sync 00:04:45.423 EAL: No shared files mode enabled, IPC is disabled 00:04:45.423 EAL: Heap on socket 0 was expanded by 6MB 00:04:45.423 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.423 EAL: request: mp_malloc_sync 00:04:45.423 EAL: No shared files mode enabled, IPC is disabled 00:04:45.423 EAL: Heap on socket 0 was shrunk by 6MB 00:04:45.423 EAL: Trying to obtain current memory policy. 00:04:45.423 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.423 EAL: Restoring previous memory policy: 4 00:04:45.423 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.423 EAL: request: mp_malloc_sync 00:04:45.423 EAL: No shared files mode enabled, IPC is disabled 00:04:45.423 EAL: Heap on socket 0 was expanded by 10MB 00:04:45.423 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.423 EAL: request: mp_malloc_sync 00:04:45.423 EAL: No shared files mode enabled, IPC is disabled 00:04:45.423 EAL: Heap on socket 0 was shrunk by 10MB 00:04:45.423 EAL: Trying to obtain current memory policy. 00:04:45.423 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.423 EAL: Restoring previous memory policy: 4 00:04:45.423 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.423 EAL: request: mp_malloc_sync 00:04:45.423 EAL: No shared files mode enabled, IPC is disabled 00:04:45.423 EAL: Heap on socket 0 was expanded by 18MB 00:04:45.423 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.423 EAL: request: mp_malloc_sync 00:04:45.423 EAL: No shared files mode enabled, IPC is disabled 00:04:45.423 EAL: Heap on socket 0 was shrunk by 18MB 00:04:45.681 EAL: Trying to obtain current memory policy. 00:04:45.681 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.681 EAL: Restoring previous memory policy: 4 00:04:45.681 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.681 EAL: request: mp_malloc_sync 00:04:45.681 EAL: No shared files mode enabled, IPC is disabled 00:04:45.681 EAL: Heap on socket 0 was expanded by 34MB 00:04:45.681 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.681 EAL: request: mp_malloc_sync 00:04:45.681 EAL: No shared files mode enabled, IPC is disabled 00:04:45.681 EAL: Heap on socket 0 was shrunk by 34MB 00:04:45.681 EAL: Trying to obtain current memory policy. 00:04:45.681 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.681 EAL: Restoring previous memory policy: 4 00:04:45.681 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.681 EAL: request: mp_malloc_sync 00:04:45.681 EAL: No shared files mode enabled, IPC is disabled 00:04:45.681 EAL: Heap on socket 0 was expanded by 66MB 00:04:45.939 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.939 EAL: request: mp_malloc_sync 00:04:45.939 EAL: No shared files mode enabled, IPC is disabled 00:04:45.939 EAL: Heap on socket 0 was shrunk by 66MB 00:04:45.939 EAL: Trying to obtain current memory policy. 
00:04:45.939 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.939 EAL: Restoring previous memory policy: 4 00:04:45.939 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.939 EAL: request: mp_malloc_sync 00:04:45.939 EAL: No shared files mode enabled, IPC is disabled 00:04:45.939 EAL: Heap on socket 0 was expanded by 130MB 00:04:46.197 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.197 EAL: request: mp_malloc_sync 00:04:46.197 EAL: No shared files mode enabled, IPC is disabled 00:04:46.197 EAL: Heap on socket 0 was shrunk by 130MB 00:04:46.455 EAL: Trying to obtain current memory policy. 00:04:46.455 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.455 EAL: Restoring previous memory policy: 4 00:04:46.455 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.455 EAL: request: mp_malloc_sync 00:04:46.455 EAL: No shared files mode enabled, IPC is disabled 00:04:46.455 EAL: Heap on socket 0 was expanded by 258MB 00:04:47.020 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.020 EAL: request: mp_malloc_sync 00:04:47.020 EAL: No shared files mode enabled, IPC is disabled 00:04:47.020 EAL: Heap on socket 0 was shrunk by 258MB 00:04:47.586 EAL: Trying to obtain current memory policy. 00:04:47.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.586 EAL: Restoring previous memory policy: 4 00:04:47.586 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.586 EAL: request: mp_malloc_sync 00:04:47.586 EAL: No shared files mode enabled, IPC is disabled 00:04:47.586 EAL: Heap on socket 0 was expanded by 514MB 00:04:48.519 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.519 EAL: request: mp_malloc_sync 00:04:48.519 EAL: No shared files mode enabled, IPC is disabled 00:04:48.519 EAL: Heap on socket 0 was shrunk by 514MB 00:04:49.451 EAL: Trying to obtain current memory policy. 
00:04:49.451 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.709 EAL: Restoring previous memory policy: 4 00:04:49.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.709 EAL: request: mp_malloc_sync 00:04:49.709 EAL: No shared files mode enabled, IPC is disabled 00:04:49.709 EAL: Heap on socket 0 was expanded by 1026MB 00:04:51.609 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.867 EAL: request: mp_malloc_sync 00:04:51.867 EAL: No shared files mode enabled, IPC is disabled 00:04:51.867 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:53.767 passed 00:04:53.767 00:04:53.767 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.767 suites 1 1 n/a 0 0 00:04:53.767 tests 2 2 2 0 0 00:04:53.767 asserts 497 497 497 0 n/a 00:04:53.767 00:04:53.767 Elapsed time = 8.294 seconds 00:04:53.767 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.767 EAL: request: mp_malloc_sync 00:04:53.767 EAL: No shared files mode enabled, IPC is disabled 00:04:53.767 EAL: Heap on socket 0 was shrunk by 2MB 00:04:53.767 EAL: No shared files mode enabled, IPC is disabled 00:04:53.767 EAL: No shared files mode enabled, IPC is disabled 00:04:53.767 EAL: No shared files mode enabled, IPC is disabled 00:04:53.767 00:04:53.767 real 0m8.558s 00:04:53.767 user 0m7.440s 00:04:53.767 sys 0m1.054s 00:04:53.767 13:16:28 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.767 13:16:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:53.767 ************************************ 00:04:53.767 END TEST env_vtophys 00:04:53.767 ************************************ 00:04:53.767 13:16:28 env -- common/autotest_common.sh@1142 -- # return 0 00:04:53.767 13:16:28 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:53.767 13:16:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.767 13:16:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.767 13:16:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.767 ************************************ 00:04:53.767 START TEST env_pci 00:04:53.767 ************************************ 00:04:53.767 13:16:28 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:53.767 00:04:53.767 00:04:53.767 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.767 http://cunit.sourceforge.net/ 00:04:53.767 00:04:53.767 00:04:53.767 Suite: pci 00:04:53.767 Test: pci_hook ...[2024-07-13 13:16:28.135045] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 140671 has claimed it 00:04:53.767 EAL: Cannot find device (10000:00:01.0) 00:04:53.767 EAL: Failed to attach device on primary process 00:04:53.767 passed 00:04:53.767 00:04:53.767 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.767 suites 1 1 n/a 0 0 00:04:53.767 tests 1 1 1 0 0 00:04:53.767 asserts 25 25 25 0 n/a 00:04:53.767 00:04:53.767 Elapsed time = 0.045 seconds 00:04:53.767 00:04:53.767 real 0m0.096s 00:04:53.767 user 0m0.043s 00:04:53.767 sys 0m0.052s 00:04:53.767 13:16:28 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.767 13:16:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:53.767 ************************************ 00:04:53.767 END TEST env_pci 00:04:53.767 ************************************ 
00:04:53.767 13:16:28 env -- common/autotest_common.sh@1142 -- # return 0 00:04:53.767 13:16:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:53.767 13:16:28 env -- env/env.sh@15 -- # uname 00:04:53.767 13:16:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:53.767 13:16:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:53.767 13:16:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:53.767 13:16:28 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:53.767 13:16:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.767 13:16:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.767 ************************************ 00:04:53.767 START TEST env_dpdk_post_init 00:04:53.767 ************************************ 00:04:53.767 13:16:28 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:53.767 EAL: Detected CPU lcores: 48 00:04:53.767 EAL: Detected NUMA nodes: 2 00:04:53.767 EAL: Detected shared linkage of DPDK 00:04:53.767 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:53.767 EAL: Selected IOVA mode 'VA' 00:04:53.767 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.767 EAL: VFIO support initialized 00:04:53.767 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:53.767 EAL: Using IOMMU type 1 (Type 1) 00:04:53.767 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:53.767 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:54.025 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:54.985 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:58.269 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:58.269 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:58.269 Starting DPDK initialization... 00:04:58.269 Starting SPDK post initialization... 00:04:58.269 SPDK NVMe probe 00:04:58.269 Attaching to 0000:88:00.0 00:04:58.269 Attached to 0000:88:00.0 00:04:58.269 Cleaning up... 
00:04:58.269 00:04:58.269 real 0m4.597s 00:04:58.269 user 0m3.406s 00:04:58.269 sys 0m0.249s 00:04:58.269 13:16:32 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.269 13:16:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.269 ************************************ 00:04:58.269 END TEST env_dpdk_post_init 00:04:58.269 ************************************ 00:04:58.269 13:16:32 env -- common/autotest_common.sh@1142 -- # return 0 00:04:58.269 13:16:32 env -- env/env.sh@26 -- # uname 00:04:58.269 13:16:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:58.269 13:16:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:58.269 13:16:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.269 13:16:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.269 13:16:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.269 ************************************ 00:04:58.269 START TEST env_mem_callbacks 00:04:58.269 ************************************ 00:04:58.269 13:16:32 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:58.269 EAL: Detected CPU lcores: 48 00:04:58.269 EAL: Detected NUMA nodes: 2 00:04:58.269 EAL: Detected shared linkage of DPDK 00:04:58.269 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:58.269 EAL: Selected IOVA mode 'VA' 00:04:58.269 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.269 EAL: VFIO support initialized 00:04:58.269 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:58.269 00:04:58.269 00:04:58.269 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.269 http://cunit.sourceforge.net/ 00:04:58.269 00:04:58.269 00:04:58.269 Suite: memory 00:04:58.269 Test: test ... 
00:04:58.269 register 0x200000200000 2097152 00:04:58.269 malloc 3145728 00:04:58.269 register 0x200000400000 4194304 00:04:58.269 buf 0x2000004fffc0 len 3145728 PASSED 00:04:58.269 malloc 64 00:04:58.269 buf 0x2000004ffec0 len 64 PASSED 00:04:58.269 malloc 4194304 00:04:58.269 register 0x200000800000 6291456 00:04:58.269 buf 0x2000009fffc0 len 4194304 PASSED 00:04:58.269 free 0x2000004fffc0 3145728 00:04:58.533 free 0x2000004ffec0 64 00:04:58.533 unregister 0x200000400000 4194304 PASSED 00:04:58.533 free 0x2000009fffc0 4194304 00:04:58.533 unregister 0x200000800000 6291456 PASSED 00:04:58.533 malloc 8388608 00:04:58.533 register 0x200000400000 10485760 00:04:58.533 buf 0x2000005fffc0 len 8388608 PASSED 00:04:58.533 free 0x2000005fffc0 8388608 00:04:58.533 unregister 0x200000400000 10485760 PASSED 00:04:58.533 passed 00:04:58.533 00:04:58.533 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.533 suites 1 1 n/a 0 0 00:04:58.533 tests 1 1 1 0 0 00:04:58.533 asserts 15 15 15 0 n/a 00:04:58.533 00:04:58.533 Elapsed time = 0.060 seconds 00:04:58.533 00:04:58.533 real 0m0.178s 00:04:58.533 user 0m0.085s 00:04:58.533 sys 0m0.093s 00:04:58.533 13:16:33 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.533 13:16:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:58.533 ************************************ 00:04:58.533 END TEST env_mem_callbacks 00:04:58.533 ************************************ 00:04:58.533 13:16:33 env -- common/autotest_common.sh@1142 -- # return 0 00:04:58.533 00:04:58.533 real 0m13.976s 00:04:58.533 user 0m11.332s 00:04:58.533 sys 0m1.657s 00:04:58.533 13:16:33 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.533 13:16:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.533 ************************************ 00:04:58.533 END TEST env 00:04:58.533 ************************************ 00:04:58.533 13:16:33 -- common/autotest_common.sh@1142 -- # return 0 00:04:58.533 13:16:33 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:58.533 13:16:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.533 13:16:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.533 13:16:33 -- common/autotest_common.sh@10 -- # set +x 00:04:58.533 ************************************ 00:04:58.533 START TEST rpc 00:04:58.533 ************************************ 00:04:58.533 13:16:33 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:58.533 * Looking for test storage... 00:04:58.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:58.533 13:16:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=141453 00:04:58.533 13:16:33 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:58.533 13:16:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.533 13:16:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 141453 00:04:58.533 13:16:33 rpc -- common/autotest_common.sh@829 -- # '[' -z 141453 ']' 00:04:58.533 13:16:33 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.533 13:16:33 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.533 13:16:33 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:58.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.533 13:16:33 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.533 13:16:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.790 [2024-07-13 13:16:33.284484] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:58.790 [2024-07-13 13:16:33.284627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141453 ] 00:04:58.790 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.790 [2024-07-13 13:16:33.406421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.048 [2024-07-13 13:16:33.658631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:59.048 [2024-07-13 13:16:33.658723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 141453' to capture a snapshot of events at runtime. 00:04:59.048 [2024-07-13 13:16:33.658749] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:59.048 [2024-07-13 13:16:33.658777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:59.048 [2024-07-13 13:16:33.658797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid141453 for offline analysis/debug. 00:04:59.048 [2024-07-13 13:16:33.658851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.980 13:16:34 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.980 13:16:34 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:59.980 13:16:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.980 13:16:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.981 13:16:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:59.981 13:16:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:59.981 13:16:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.981 13:16:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.981 13:16:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.981 ************************************ 00:04:59.981 START TEST rpc_integrity 00:04:59.981 ************************************ 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.981 { 00:04:59.981 "name": "Malloc0", 00:04:59.981 "aliases": [ 00:04:59.981 "75d13783-ba15-4a8d-9583-1f605bb58cc4" 00:04:59.981 ], 00:04:59.981 "product_name": "Malloc disk", 00:04:59.981 "block_size": 512, 00:04:59.981 "num_blocks": 16384, 00:04:59.981 "uuid": "75d13783-ba15-4a8d-9583-1f605bb58cc4", 00:04:59.981 "assigned_rate_limits": { 00:04:59.981 "rw_ios_per_sec": 0, 00:04:59.981 "rw_mbytes_per_sec": 0, 00:04:59.981 "r_mbytes_per_sec": 0, 00:04:59.981 "w_mbytes_per_sec": 0 00:04:59.981 }, 00:04:59.981 "claimed": false, 00:04:59.981 "zoned": false, 00:04:59.981 "supported_io_types": { 00:04:59.981 "read": true, 00:04:59.981 "write": true, 00:04:59.981 "unmap": true, 00:04:59.981 "flush": true, 00:04:59.981 "reset": true, 00:04:59.981 "nvme_admin": false, 00:04:59.981 "nvme_io": false, 00:04:59.981 "nvme_io_md": false, 00:04:59.981 "write_zeroes": true, 00:04:59.981 "zcopy": true, 00:04:59.981 "get_zone_info": false, 00:04:59.981 "zone_management": false, 00:04:59.981 "zone_append": false, 00:04:59.981 "compare": false, 00:04:59.981 "compare_and_write": false, 00:04:59.981 "abort": true, 00:04:59.981 "seek_hole": false, 00:04:59.981 "seek_data": false, 00:04:59.981 "copy": true, 00:04:59.981 "nvme_iov_md": false 00:04:59.981 }, 00:04:59.981 "memory_domains": [ 00:04:59.981 { 00:04:59.981 "dma_device_id": "system", 00:04:59.981 "dma_device_type": 1 00:04:59.981 }, 00:04:59.981 { 00:04:59.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.981 "dma_device_type": 2 00:04:59.981 } 00:04:59.981 ], 00:04:59.981 "driver_specific": {} 00:04:59.981 } 00:04:59.981 ]' 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.981 [2024-07-13 13:16:34.654234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:59.981 [2024-07-13 13:16:34.654313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.981 [2024-07-13 13:16:34.654361] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:59.981 [2024-07-13 13:16:34.654392] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:04:59.981 [2024-07-13 13:16:34.657106] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.981 [2024-07-13 13:16:34.657168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.981 Passthru0 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.981 { 00:04:59.981 "name": "Malloc0", 00:04:59.981 "aliases": [ 00:04:59.981 "75d13783-ba15-4a8d-9583-1f605bb58cc4" 00:04:59.981 ], 00:04:59.981 "product_name": "Malloc disk", 00:04:59.981 "block_size": 512, 00:04:59.981 "num_blocks": 16384, 00:04:59.981 "uuid": "75d13783-ba15-4a8d-9583-1f605bb58cc4", 00:04:59.981 "assigned_rate_limits": { 00:04:59.981 "rw_ios_per_sec": 0, 00:04:59.981 "rw_mbytes_per_sec": 0, 00:04:59.981 "r_mbytes_per_sec": 0, 00:04:59.981 "w_mbytes_per_sec": 0 00:04:59.981 }, 00:04:59.981 "claimed": true, 00:04:59.981 "claim_type": "exclusive_write", 00:04:59.981 "zoned": false, 00:04:59.981 "supported_io_types": { 00:04:59.981 "read": true, 00:04:59.981 "write": true, 00:04:59.981 "unmap": true, 00:04:59.981 "flush": true, 00:04:59.981 "reset": true, 00:04:59.981 "nvme_admin": false, 00:04:59.981 "nvme_io": false, 00:04:59.981 "nvme_io_md": false, 00:04:59.981 "write_zeroes": true, 00:04:59.981 "zcopy": true, 00:04:59.981 "get_zone_info": false, 00:04:59.981 "zone_management": false, 00:04:59.981 "zone_append": false, 00:04:59.981 "compare": false, 00:04:59.981 "compare_and_write": false, 00:04:59.981 "abort": true, 00:04:59.981 "seek_hole": false, 00:04:59.981 "seek_data": false, 00:04:59.981 "copy": true, 00:04:59.981 "nvme_iov_md": false 00:04:59.981 }, 00:04:59.981 "memory_domains": [ 00:04:59.981 { 00:04:59.981 "dma_device_id": "system", 00:04:59.981 "dma_device_type": 1 00:04:59.981 }, 00:04:59.981 { 00:04:59.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.981 "dma_device_type": 2 00:04:59.981 } 00:04:59.981 ], 00:04:59.981 "driver_specific": {} 00:04:59.981 }, 00:04:59.981 { 00:04:59.981 "name": "Passthru0", 00:04:59.981 "aliases": [ 00:04:59.981 "a1403cd9-0483-51c7-b99e-6a6611d4c82f" 00:04:59.981 ], 00:04:59.981 "product_name": "passthru", 00:04:59.981 "block_size": 512, 00:04:59.981 "num_blocks": 16384, 00:04:59.981 "uuid": "a1403cd9-0483-51c7-b99e-6a6611d4c82f", 00:04:59.981 "assigned_rate_limits": { 00:04:59.981 "rw_ios_per_sec": 0, 00:04:59.981 "rw_mbytes_per_sec": 0, 00:04:59.981 "r_mbytes_per_sec": 0, 00:04:59.981 "w_mbytes_per_sec": 0 00:04:59.981 }, 00:04:59.981 "claimed": false, 00:04:59.981 "zoned": false, 00:04:59.981 "supported_io_types": { 00:04:59.981 "read": true, 00:04:59.981 "write": true, 00:04:59.981 "unmap": true, 00:04:59.981 "flush": true, 00:04:59.981 "reset": true, 00:04:59.981 "nvme_admin": false, 00:04:59.981 "nvme_io": false, 00:04:59.981 "nvme_io_md": false, 00:04:59.981 "write_zeroes": true, 00:04:59.981 "zcopy": true, 00:04:59.981 "get_zone_info": false, 00:04:59.981 "zone_management": false, 00:04:59.981 "zone_append": false, 00:04:59.981 "compare": false, 00:04:59.981 "compare_and_write": false, 00:04:59.981 "abort": true, 00:04:59.981 
"seek_hole": false, 00:04:59.981 "seek_data": false, 00:04:59.981 "copy": true, 00:04:59.981 "nvme_iov_md": false 00:04:59.981 }, 00:04:59.981 "memory_domains": [ 00:04:59.981 { 00:04:59.981 "dma_device_id": "system", 00:04:59.981 "dma_device_type": 1 00:04:59.981 }, 00:04:59.981 { 00:04:59.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.981 "dma_device_type": 2 00:04:59.981 } 00:04:59.981 ], 00:04:59.981 "driver_specific": { 00:04:59.981 "passthru": { 00:04:59.981 "name": "Passthru0", 00:04:59.981 "base_bdev_name": "Malloc0" 00:04:59.981 } 00:04:59.981 } 00:04:59.981 } 00:04:59.981 ]' 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.981 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.981 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.240 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.240 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:00.240 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.240 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.240 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.240 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:00.240 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:00.240 13:16:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:00.240 00:05:00.240 real 0m0.264s 00:05:00.240 user 0m0.151s 00:05:00.240 sys 0m0.024s 00:05:00.240 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.240 13:16:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.240 ************************************ 00:05:00.240 END TEST rpc_integrity 00:05:00.240 ************************************ 00:05:00.240 13:16:34 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:00.240 13:16:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:00.240 13:16:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.240 13:16:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.240 13:16:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.240 ************************************ 00:05:00.240 START TEST rpc_plugins 00:05:00.240 ************************************ 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:00.240 13:16:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.240 13:16:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:00.240 13:16:34 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.240 13:16:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:00.240 { 00:05:00.240 "name": "Malloc1", 00:05:00.240 "aliases": [ 00:05:00.240 "72a2c642-ebd4-4bab-a90e-bbe7d085211d" 00:05:00.240 ], 00:05:00.240 "product_name": "Malloc disk", 00:05:00.240 "block_size": 4096, 00:05:00.240 "num_blocks": 256, 00:05:00.240 "uuid": "72a2c642-ebd4-4bab-a90e-bbe7d085211d", 00:05:00.240 "assigned_rate_limits": { 00:05:00.240 "rw_ios_per_sec": 0, 00:05:00.240 "rw_mbytes_per_sec": 0, 00:05:00.240 "r_mbytes_per_sec": 0, 00:05:00.240 "w_mbytes_per_sec": 0 00:05:00.240 }, 00:05:00.240 "claimed": false, 00:05:00.240 "zoned": false, 00:05:00.240 "supported_io_types": { 00:05:00.240 "read": true, 00:05:00.240 "write": true, 00:05:00.240 "unmap": true, 00:05:00.240 "flush": true, 00:05:00.240 "reset": true, 00:05:00.240 "nvme_admin": false, 00:05:00.240 "nvme_io": false, 00:05:00.240 "nvme_io_md": false, 00:05:00.240 "write_zeroes": true, 00:05:00.240 "zcopy": true, 00:05:00.240 "get_zone_info": false, 00:05:00.240 "zone_management": false, 00:05:00.240 "zone_append": false, 00:05:00.240 "compare": false, 00:05:00.240 "compare_and_write": false, 00:05:00.240 "abort": true, 00:05:00.240 "seek_hole": false, 00:05:00.240 "seek_data": false, 00:05:00.240 "copy": true, 00:05:00.240 "nvme_iov_md": false 00:05:00.240 }, 00:05:00.240 "memory_domains": [ 00:05:00.240 { 00:05:00.240 "dma_device_id": "system", 00:05:00.240 "dma_device_type": 1 00:05:00.240 }, 00:05:00.240 { 00:05:00.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.240 "dma_device_type": 2 00:05:00.240 } 00:05:00.240 ], 00:05:00.240 "driver_specific": {} 00:05:00.240 } 00:05:00.240 ]' 00:05:00.240 13:16:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:00.240 13:16:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:00.240 13:16:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.240 13:16:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.240 13:16:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:00.240 13:16:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:00.240 13:16:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:00.240 00:05:00.240 real 0m0.120s 00:05:00.240 user 0m0.076s 00:05:00.240 sys 0m0.012s 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.240 13:16:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.240 ************************************ 00:05:00.240 END TEST rpc_plugins 00:05:00.240 ************************************ 00:05:00.498 13:16:34 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:00.498 13:16:34 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:00.498 13:16:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.498 13:16:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.498 13:16:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.498 ************************************ 00:05:00.498 START TEST rpc_trace_cmd_test 00:05:00.498 ************************************ 00:05:00.498 13:16:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:00.498 13:16:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:00.498 13:16:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:00.498 13:16:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.498 13:16:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.498 13:16:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.498 13:16:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:00.498 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid141453", 00:05:00.498 "tpoint_group_mask": "0x8", 00:05:00.498 "iscsi_conn": { 00:05:00.498 "mask": "0x2", 00:05:00.498 "tpoint_mask": "0x0" 00:05:00.498 }, 00:05:00.498 "scsi": { 00:05:00.498 "mask": "0x4", 00:05:00.498 "tpoint_mask": "0x0" 00:05:00.498 }, 00:05:00.498 "bdev": { 00:05:00.498 "mask": "0x8", 00:05:00.498 "tpoint_mask": "0xffffffffffffffff" 00:05:00.498 }, 00:05:00.498 "nvmf_rdma": { 00:05:00.498 "mask": "0x10", 00:05:00.498 "tpoint_mask": "0x0" 00:05:00.498 }, 00:05:00.498 "nvmf_tcp": { 00:05:00.498 "mask": "0x20", 00:05:00.498 "tpoint_mask": "0x0" 00:05:00.498 }, 00:05:00.498 "ftl": { 00:05:00.498 "mask": "0x40", 00:05:00.498 "tpoint_mask": "0x0" 00:05:00.498 }, 00:05:00.498 "blobfs": { 00:05:00.498 "mask": "0x80", 00:05:00.498 "tpoint_mask": "0x0" 00:05:00.498 }, 00:05:00.498 "dsa": { 00:05:00.498 "mask": "0x200", 00:05:00.498 "tpoint_mask": "0x0" 00:05:00.498 }, 00:05:00.498 "thread": { 00:05:00.498 "mask": "0x400", 00:05:00.498 "tpoint_mask": "0x0" 00:05:00.498 }, 00:05:00.498 "nvme_pcie": { 00:05:00.498 "mask": "0x800", 00:05:00.498 "tpoint_mask": "0x0" 00:05:00.498 }, 00:05:00.498 "iaa": { 00:05:00.498 "mask": "0x1000", 00:05:00.498 "tpoint_mask": "0x0" 00:05:00.498 }, 00:05:00.498 "nvme_tcp": { 00:05:00.498 "mask": "0x2000", 00:05:00.498 "tpoint_mask": "0x0" 00:05:00.498 }, 00:05:00.498 "bdev_nvme": { 00:05:00.498 "mask": "0x4000", 00:05:00.498 "tpoint_mask": "0x0" 00:05:00.498 }, 00:05:00.498 "sock": { 00:05:00.498 "mask": "0x8000", 00:05:00.498 "tpoint_mask": "0x0" 00:05:00.498 } 00:05:00.499 }' 00:05:00.499 13:16:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:00.499 13:16:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:00.499 13:16:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:00.499 13:16:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:00.499 13:16:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:00.499 13:16:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:00.499 13:16:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:00.499 13:16:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:00.499 13:16:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:00.499 13:16:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:05:00.499 00:05:00.499 real 0m0.203s 00:05:00.499 user 0m0.177s 00:05:00.499 sys 0m0.018s 00:05:00.499 13:16:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.499 13:16:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.499 ************************************ 00:05:00.499 END TEST rpc_trace_cmd_test 00:05:00.499 ************************************ 00:05:00.499 13:16:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:00.499 13:16:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:00.499 13:16:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:00.499 13:16:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:00.499 13:16:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.499 13:16:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.499 13:16:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.758 ************************************ 00:05:00.758 START TEST rpc_daemon_integrity 00:05:00.758 ************************************ 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.758 { 00:05:00.758 "name": "Malloc2", 00:05:00.758 "aliases": [ 00:05:00.758 "e3c33c45-a1b9-4eab-a6ab-680b5944f0cf" 00:05:00.758 ], 00:05:00.758 "product_name": "Malloc disk", 00:05:00.758 "block_size": 512, 00:05:00.758 "num_blocks": 16384, 00:05:00.758 "uuid": "e3c33c45-a1b9-4eab-a6ab-680b5944f0cf", 00:05:00.758 "assigned_rate_limits": { 00:05:00.758 "rw_ios_per_sec": 0, 00:05:00.758 "rw_mbytes_per_sec": 0, 00:05:00.758 "r_mbytes_per_sec": 0, 00:05:00.758 "w_mbytes_per_sec": 0 00:05:00.758 }, 00:05:00.758 "claimed": false, 00:05:00.758 "zoned": false, 00:05:00.758 "supported_io_types": { 00:05:00.758 "read": true, 00:05:00.758 "write": true, 00:05:00.758 "unmap": true, 00:05:00.758 "flush": true, 00:05:00.758 "reset": true, 00:05:00.758 "nvme_admin": false, 
00:05:00.758 "nvme_io": false, 00:05:00.758 "nvme_io_md": false, 00:05:00.758 "write_zeroes": true, 00:05:00.758 "zcopy": true, 00:05:00.758 "get_zone_info": false, 00:05:00.758 "zone_management": false, 00:05:00.758 "zone_append": false, 00:05:00.758 "compare": false, 00:05:00.758 "compare_and_write": false, 00:05:00.758 "abort": true, 00:05:00.758 "seek_hole": false, 00:05:00.758 "seek_data": false, 00:05:00.758 "copy": true, 00:05:00.758 "nvme_iov_md": false 00:05:00.758 }, 00:05:00.758 "memory_domains": [ 00:05:00.758 { 00:05:00.758 "dma_device_id": "system", 00:05:00.758 "dma_device_type": 1 00:05:00.758 }, 00:05:00.758 { 00:05:00.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.758 "dma_device_type": 2 00:05:00.758 } 00:05:00.758 ], 00:05:00.758 "driver_specific": {} 00:05:00.758 } 00:05:00.758 ]' 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.758 [2024-07-13 13:16:35.379775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:00.758 [2024-07-13 13:16:35.379845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.758 [2024-07-13 13:16:35.379910] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:05:00.758 [2024-07-13 13:16:35.379939] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.758 [2024-07-13 13:16:35.382528] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.758 [2024-07-13 13:16:35.382571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:00.758 Passthru0 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.758 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:00.758 { 00:05:00.758 "name": "Malloc2", 00:05:00.758 "aliases": [ 00:05:00.758 "e3c33c45-a1b9-4eab-a6ab-680b5944f0cf" 00:05:00.758 ], 00:05:00.758 "product_name": "Malloc disk", 00:05:00.758 "block_size": 512, 00:05:00.758 "num_blocks": 16384, 00:05:00.758 "uuid": "e3c33c45-a1b9-4eab-a6ab-680b5944f0cf", 00:05:00.758 "assigned_rate_limits": { 00:05:00.758 "rw_ios_per_sec": 0, 00:05:00.758 "rw_mbytes_per_sec": 0, 00:05:00.758 "r_mbytes_per_sec": 0, 00:05:00.758 "w_mbytes_per_sec": 0 00:05:00.758 }, 00:05:00.758 "claimed": true, 00:05:00.758 "claim_type": "exclusive_write", 00:05:00.758 "zoned": false, 00:05:00.758 "supported_io_types": { 00:05:00.758 "read": true, 00:05:00.758 "write": true, 00:05:00.758 "unmap": true, 00:05:00.758 "flush": true, 00:05:00.758 "reset": true, 00:05:00.758 "nvme_admin": false, 00:05:00.758 "nvme_io": false, 00:05:00.758 "nvme_io_md": false, 00:05:00.758 "write_zeroes": true, 00:05:00.758 "zcopy": 
true, 00:05:00.758 "get_zone_info": false, 00:05:00.758 "zone_management": false, 00:05:00.758 "zone_append": false, 00:05:00.758 "compare": false, 00:05:00.758 "compare_and_write": false, 00:05:00.758 "abort": true, 00:05:00.758 "seek_hole": false, 00:05:00.758 "seek_data": false, 00:05:00.758 "copy": true, 00:05:00.758 "nvme_iov_md": false 00:05:00.758 }, 00:05:00.758 "memory_domains": [ 00:05:00.758 { 00:05:00.758 "dma_device_id": "system", 00:05:00.758 "dma_device_type": 1 00:05:00.758 }, 00:05:00.758 { 00:05:00.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.758 "dma_device_type": 2 00:05:00.758 } 00:05:00.758 ], 00:05:00.758 "driver_specific": {} 00:05:00.758 }, 00:05:00.758 { 00:05:00.758 "name": "Passthru0", 00:05:00.758 "aliases": [ 00:05:00.758 "63f15f92-561b-56f1-b96a-754669cb8dfb" 00:05:00.758 ], 00:05:00.758 "product_name": "passthru", 00:05:00.758 "block_size": 512, 00:05:00.758 "num_blocks": 16384, 00:05:00.758 "uuid": "63f15f92-561b-56f1-b96a-754669cb8dfb", 00:05:00.758 "assigned_rate_limits": { 00:05:00.758 "rw_ios_per_sec": 0, 00:05:00.758 "rw_mbytes_per_sec": 0, 00:05:00.758 "r_mbytes_per_sec": 0, 00:05:00.758 "w_mbytes_per_sec": 0 00:05:00.758 }, 00:05:00.758 "claimed": false, 00:05:00.758 "zoned": false, 00:05:00.758 "supported_io_types": { 00:05:00.758 "read": true, 00:05:00.758 "write": true, 00:05:00.758 "unmap": true, 00:05:00.758 "flush": true, 00:05:00.758 "reset": true, 00:05:00.758 "nvme_admin": false, 00:05:00.759 "nvme_io": false, 00:05:00.759 "nvme_io_md": false, 00:05:00.759 "write_zeroes": true, 00:05:00.759 "zcopy": true, 00:05:00.759 "get_zone_info": false, 00:05:00.759 "zone_management": false, 00:05:00.759 "zone_append": false, 00:05:00.759 "compare": false, 00:05:00.759 "compare_and_write": false, 00:05:00.759 "abort": true, 00:05:00.759 "seek_hole": false, 00:05:00.759 "seek_data": false, 00:05:00.759 "copy": true, 00:05:00.759 "nvme_iov_md": false 00:05:00.759 }, 00:05:00.759 "memory_domains": [ 00:05:00.759 { 00:05:00.759 "dma_device_id": "system", 00:05:00.759 "dma_device_type": 1 00:05:00.759 }, 00:05:00.759 { 00:05:00.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.759 "dma_device_type": 2 00:05:00.759 } 00:05:00.759 ], 00:05:00.759 "driver_specific": { 00:05:00.759 "passthru": { 00:05:00.759 "name": "Passthru0", 00:05:00.759 "base_bdev_name": "Malloc2" 00:05:00.759 } 00:05:00.759 } 00:05:00.759 } 00:05:00.759 ]' 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:00.759 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.018 13:16:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.018 00:05:01.018 real 0m0.254s 00:05:01.018 user 0m0.152s 00:05:01.018 sys 0m0.021s 00:05:01.018 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.018 13:16:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.018 ************************************ 00:05:01.018 END TEST rpc_daemon_integrity 00:05:01.018 ************************************ 00:05:01.018 13:16:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:01.018 13:16:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:01.018 13:16:35 rpc -- rpc/rpc.sh@84 -- # killprocess 141453 00:05:01.018 13:16:35 rpc -- common/autotest_common.sh@948 -- # '[' -z 141453 ']' 00:05:01.018 13:16:35 rpc -- common/autotest_common.sh@952 -- # kill -0 141453 00:05:01.018 13:16:35 rpc -- common/autotest_common.sh@953 -- # uname 00:05:01.018 13:16:35 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.018 13:16:35 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 141453 00:05:01.018 13:16:35 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.018 13:16:35 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.018 13:16:35 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 141453' 00:05:01.018 killing process with pid 141453 00:05:01.018 13:16:35 rpc -- common/autotest_common.sh@967 -- # kill 141453 00:05:01.018 13:16:35 rpc -- common/autotest_common.sh@972 -- # wait 141453 00:05:03.551 00:05:03.551 real 0m4.917s 00:05:03.551 user 0m5.429s 00:05:03.551 sys 0m0.793s 00:05:03.551 13:16:38 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.551 13:16:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.551 ************************************ 00:05:03.551 END TEST rpc 00:05:03.551 ************************************ 00:05:03.551 13:16:38 -- common/autotest_common.sh@1142 -- # return 0 00:05:03.551 13:16:38 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:03.551 13:16:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.551 13:16:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.551 13:16:38 -- common/autotest_common.sh@10 -- # set +x 00:05:03.551 ************************************ 00:05:03.551 START TEST skip_rpc 00:05:03.551 ************************************ 00:05:03.551 13:16:38 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:03.551 * Looking for test storage... 
00:05:03.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.551 13:16:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.551 13:16:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:03.551 13:16:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:03.551 13:16:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.551 13:16:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.551 13:16:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.551 ************************************ 00:05:03.551 START TEST skip_rpc 00:05:03.551 ************************************ 00:05:03.551 13:16:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:03.551 13:16:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=142171 00:05:03.551 13:16:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:03.551 13:16:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.551 13:16:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:03.551 [2024-07-13 13:16:38.292073] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:03.551 [2024-07-13 13:16:38.292237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142171 ] 00:05:03.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.810 [2024-07-13 13:16:38.435710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.069 [2024-07-13 13:16:38.695315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 142171 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 142171 ']' 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 142171 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 142171 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 142171' 00:05:09.326 killing process with pid 142171 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 142171 00:05:09.326 13:16:43 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 142171 00:05:11.223 00:05:11.223 real 0m7.521s 00:05:11.223 user 0m6.990s 00:05:11.223 sys 0m0.513s 00:05:11.223 13:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.223 13:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.223 ************************************ 00:05:11.223 END TEST skip_rpc 00:05:11.223 ************************************ 00:05:11.223 13:16:45 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:11.223 13:16:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:11.223 13:16:45 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.223 13:16:45 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.223 13:16:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.223 ************************************ 00:05:11.223 START TEST skip_rpc_with_json 00:05:11.223 ************************************ 00:05:11.223 13:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:11.223 13:16:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:11.223 13:16:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=143124 00:05:11.223 13:16:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.223 13:16:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.223 13:16:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 143124 00:05:11.223 13:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 143124 ']' 00:05:11.223 13:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.223 13:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.223 13:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
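The skip_rpc case that just finished starts the target with no RPC server and checks that RPC calls fail. A condensed sketch of that flow, using the helper names from this trace (the real script uses the harness's NOT and killprocess helpers rather than the plain if/kill shown here), is:

    spdk_tgt --no-rpc-server -m 0x1 &           # build/bin/spdk_tgt from this workspace
    spdk_pid=$!
    sleep 5                                     # the script sleeps rather than waiting on a socket
    if rpc_cmd spdk_get_version; then           # must fail: nothing listens on /var/tmp/spdk.sock
        echo 'unexpected: RPC succeeded without an RPC server' >&2
        exit 1
    fi
    kill "$spdk_pid"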
00:05:11.223 13:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.223 13:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.223 [2024-07-13 13:16:45.853119] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:11.223 [2024-07-13 13:16:45.853273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143124 ] 00:05:11.223 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.480 [2024-07-13 13:16:45.978088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.737 [2024-07-13 13:16:46.230154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.670 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.670 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:12.671 13:16:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:12.671 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.671 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.671 [2024-07-13 13:16:47.098919] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:12.671 request: 00:05:12.671 { 00:05:12.671 "trtype": "tcp", 00:05:12.671 "method": "nvmf_get_transports", 00:05:12.671 "req_id": 1 00:05:12.671 } 00:05:12.671 Got JSON-RPC error response 00:05:12.671 response: 00:05:12.671 { 00:05:12.671 "code": -19, 00:05:12.671 "message": "No such device" 00:05:12.671 } 00:05:12.671 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:12.671 13:16:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:12.671 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.671 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.671 [2024-07-13 13:16:47.107059] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.671 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.671 13:16:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:12.671 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.671 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:12.671 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.671 13:16:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:12.671 { 00:05:12.671 "subsystems": [ 00:05:12.671 { 00:05:12.671 "subsystem": "keyring", 00:05:12.671 "config": [] 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "subsystem": "iobuf", 00:05:12.671 "config": [ 00:05:12.671 { 00:05:12.671 "method": "iobuf_set_options", 00:05:12.671 "params": { 00:05:12.671 "small_pool_count": 8192, 00:05:12.671 "large_pool_count": 1024, 00:05:12.671 "small_bufsize": 8192, 00:05:12.671 "large_bufsize": 135168 00:05:12.671 } 00:05:12.671 } 00:05:12.671 ] 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "subsystem": 
"sock", 00:05:12.671 "config": [ 00:05:12.671 { 00:05:12.671 "method": "sock_set_default_impl", 00:05:12.671 "params": { 00:05:12.671 "impl_name": "posix" 00:05:12.671 } 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "method": "sock_impl_set_options", 00:05:12.671 "params": { 00:05:12.671 "impl_name": "ssl", 00:05:12.671 "recv_buf_size": 4096, 00:05:12.671 "send_buf_size": 4096, 00:05:12.671 "enable_recv_pipe": true, 00:05:12.671 "enable_quickack": false, 00:05:12.671 "enable_placement_id": 0, 00:05:12.671 "enable_zerocopy_send_server": true, 00:05:12.671 "enable_zerocopy_send_client": false, 00:05:12.671 "zerocopy_threshold": 0, 00:05:12.671 "tls_version": 0, 00:05:12.671 "enable_ktls": false 00:05:12.671 } 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "method": "sock_impl_set_options", 00:05:12.671 "params": { 00:05:12.671 "impl_name": "posix", 00:05:12.671 "recv_buf_size": 2097152, 00:05:12.671 "send_buf_size": 2097152, 00:05:12.671 "enable_recv_pipe": true, 00:05:12.671 "enable_quickack": false, 00:05:12.671 "enable_placement_id": 0, 00:05:12.671 "enable_zerocopy_send_server": true, 00:05:12.671 "enable_zerocopy_send_client": false, 00:05:12.671 "zerocopy_threshold": 0, 00:05:12.671 "tls_version": 0, 00:05:12.671 "enable_ktls": false 00:05:12.671 } 00:05:12.671 } 00:05:12.671 ] 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "subsystem": "vmd", 00:05:12.671 "config": [] 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "subsystem": "accel", 00:05:12.671 "config": [ 00:05:12.671 { 00:05:12.671 "method": "accel_set_options", 00:05:12.671 "params": { 00:05:12.671 "small_cache_size": 128, 00:05:12.671 "large_cache_size": 16, 00:05:12.671 "task_count": 2048, 00:05:12.671 "sequence_count": 2048, 00:05:12.671 "buf_count": 2048 00:05:12.671 } 00:05:12.671 } 00:05:12.671 ] 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "subsystem": "bdev", 00:05:12.671 "config": [ 00:05:12.671 { 00:05:12.671 "method": "bdev_set_options", 00:05:12.671 "params": { 00:05:12.671 "bdev_io_pool_size": 65535, 00:05:12.671 "bdev_io_cache_size": 256, 00:05:12.671 "bdev_auto_examine": true, 00:05:12.671 "iobuf_small_cache_size": 128, 00:05:12.671 "iobuf_large_cache_size": 16 00:05:12.671 } 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "method": "bdev_raid_set_options", 00:05:12.671 "params": { 00:05:12.671 "process_window_size_kb": 1024 00:05:12.671 } 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "method": "bdev_iscsi_set_options", 00:05:12.671 "params": { 00:05:12.671 "timeout_sec": 30 00:05:12.671 } 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "method": "bdev_nvme_set_options", 00:05:12.671 "params": { 00:05:12.671 "action_on_timeout": "none", 00:05:12.671 "timeout_us": 0, 00:05:12.671 "timeout_admin_us": 0, 00:05:12.671 "keep_alive_timeout_ms": 10000, 00:05:12.671 "arbitration_burst": 0, 00:05:12.671 "low_priority_weight": 0, 00:05:12.671 "medium_priority_weight": 0, 00:05:12.671 "high_priority_weight": 0, 00:05:12.671 "nvme_adminq_poll_period_us": 10000, 00:05:12.671 "nvme_ioq_poll_period_us": 0, 00:05:12.671 "io_queue_requests": 0, 00:05:12.671 "delay_cmd_submit": true, 00:05:12.671 "transport_retry_count": 4, 00:05:12.671 "bdev_retry_count": 3, 00:05:12.671 "transport_ack_timeout": 0, 00:05:12.671 "ctrlr_loss_timeout_sec": 0, 00:05:12.671 "reconnect_delay_sec": 0, 00:05:12.671 "fast_io_fail_timeout_sec": 0, 00:05:12.671 "disable_auto_failback": false, 00:05:12.671 "generate_uuids": false, 00:05:12.671 "transport_tos": 0, 00:05:12.671 "nvme_error_stat": false, 00:05:12.671 "rdma_srq_size": 0, 00:05:12.671 "io_path_stat": false, 
00:05:12.671 "allow_accel_sequence": false, 00:05:12.671 "rdma_max_cq_size": 0, 00:05:12.671 "rdma_cm_event_timeout_ms": 0, 00:05:12.671 "dhchap_digests": [ 00:05:12.671 "sha256", 00:05:12.671 "sha384", 00:05:12.671 "sha512" 00:05:12.671 ], 00:05:12.671 "dhchap_dhgroups": [ 00:05:12.671 "null", 00:05:12.671 "ffdhe2048", 00:05:12.671 "ffdhe3072", 00:05:12.671 "ffdhe4096", 00:05:12.671 "ffdhe6144", 00:05:12.671 "ffdhe8192" 00:05:12.671 ] 00:05:12.671 } 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "method": "bdev_nvme_set_hotplug", 00:05:12.671 "params": { 00:05:12.671 "period_us": 100000, 00:05:12.671 "enable": false 00:05:12.671 } 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "method": "bdev_wait_for_examine" 00:05:12.671 } 00:05:12.671 ] 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "subsystem": "scsi", 00:05:12.671 "config": null 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "subsystem": "scheduler", 00:05:12.671 "config": [ 00:05:12.671 { 00:05:12.671 "method": "framework_set_scheduler", 00:05:12.671 "params": { 00:05:12.671 "name": "static" 00:05:12.671 } 00:05:12.671 } 00:05:12.671 ] 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "subsystem": "vhost_scsi", 00:05:12.671 "config": [] 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "subsystem": "vhost_blk", 00:05:12.671 "config": [] 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "subsystem": "ublk", 00:05:12.671 "config": [] 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "subsystem": "nbd", 00:05:12.671 "config": [] 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "subsystem": "nvmf", 00:05:12.671 "config": [ 00:05:12.671 { 00:05:12.671 "method": "nvmf_set_config", 00:05:12.671 "params": { 00:05:12.671 "discovery_filter": "match_any", 00:05:12.671 "admin_cmd_passthru": { 00:05:12.671 "identify_ctrlr": false 00:05:12.671 } 00:05:12.671 } 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "method": "nvmf_set_max_subsystems", 00:05:12.671 "params": { 00:05:12.671 "max_subsystems": 1024 00:05:12.671 } 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "method": "nvmf_set_crdt", 00:05:12.671 "params": { 00:05:12.671 "crdt1": 0, 00:05:12.671 "crdt2": 0, 00:05:12.671 "crdt3": 0 00:05:12.671 } 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "method": "nvmf_create_transport", 00:05:12.671 "params": { 00:05:12.671 "trtype": "TCP", 00:05:12.671 "max_queue_depth": 128, 00:05:12.671 "max_io_qpairs_per_ctrlr": 127, 00:05:12.671 "in_capsule_data_size": 4096, 00:05:12.671 "max_io_size": 131072, 00:05:12.671 "io_unit_size": 131072, 00:05:12.671 "max_aq_depth": 128, 00:05:12.671 "num_shared_buffers": 511, 00:05:12.671 "buf_cache_size": 4294967295, 00:05:12.671 "dif_insert_or_strip": false, 00:05:12.671 "zcopy": false, 00:05:12.671 "c2h_success": true, 00:05:12.671 "sock_priority": 0, 00:05:12.671 "abort_timeout_sec": 1, 00:05:12.671 "ack_timeout": 0, 00:05:12.671 "data_wr_pool_size": 0 00:05:12.671 } 00:05:12.671 } 00:05:12.671 ] 00:05:12.671 }, 00:05:12.671 { 00:05:12.671 "subsystem": "iscsi", 00:05:12.671 "config": [ 00:05:12.671 { 00:05:12.671 "method": "iscsi_set_options", 00:05:12.671 "params": { 00:05:12.671 "node_base": "iqn.2016-06.io.spdk", 00:05:12.671 "max_sessions": 128, 00:05:12.671 "max_connections_per_session": 2, 00:05:12.671 "max_queue_depth": 64, 00:05:12.671 "default_time2wait": 2, 00:05:12.671 "default_time2retain": 20, 00:05:12.671 "first_burst_length": 8192, 00:05:12.671 "immediate_data": true, 00:05:12.671 "allow_duplicated_isid": false, 00:05:12.671 "error_recovery_level": 0, 00:05:12.671 "nop_timeout": 60, 00:05:12.672 "nop_in_interval": 30, 00:05:12.672 "disable_chap": 
false, 00:05:12.672 "require_chap": false, 00:05:12.672 "mutual_chap": false, 00:05:12.672 "chap_group": 0, 00:05:12.672 "max_large_datain_per_connection": 64, 00:05:12.672 "max_r2t_per_connection": 4, 00:05:12.672 "pdu_pool_size": 36864, 00:05:12.672 "immediate_data_pool_size": 16384, 00:05:12.672 "data_out_pool_size": 2048 00:05:12.672 } 00:05:12.672 } 00:05:12.672 ] 00:05:12.672 } 00:05:12.672 ] 00:05:12.672 } 00:05:12.672 13:16:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:12.672 13:16:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 143124 00:05:12.672 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 143124 ']' 00:05:12.672 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 143124 00:05:12.672 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:12.672 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.672 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 143124 00:05:12.672 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.672 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.672 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 143124' 00:05:12.672 killing process with pid 143124 00:05:12.672 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 143124 00:05:12.672 13:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 143124 00:05:15.236 13:16:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=143549 00:05:15.236 13:16:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:15.236 13:16:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:20.494 13:16:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 143549 00:05:20.494 13:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 143549 ']' 00:05:20.494 13:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 143549 00:05:20.494 13:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:20.494 13:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.494 13:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 143549 00:05:20.494 13:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.494 13:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.494 13:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 143549' 00:05:20.494 killing process with pid 143549 00:05:20.494 13:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 143549 00:05:20.494 13:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 143549 00:05:23.021 13:16:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:23.021 13:16:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:23.021 00:05:23.021 real 0m11.561s 00:05:23.021 user 0m10.995s 00:05:23.021 sys 0m1.071s 00:05:23.021 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.021 13:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.021 ************************************ 00:05:23.021 END TEST skip_rpc_with_json 00:05:23.021 ************************************ 00:05:23.021 13:16:57 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:23.021 13:16:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:23.021 13:16:57 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.021 13:16:57 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.021 13:16:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.021 ************************************ 00:05:23.021 START TEST skip_rpc_with_delay 00:05:23.022 ************************************ 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.022 [2024-07-13 13:16:57.456241] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
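The ERROR just above is the expected outcome of the skip_rpc_with_delay case: spdk_tgt must refuse --wait-for-rpc when started with --no-rpc-server. A by-hand reproduction, assuming the same binary as the rest of this run, would be:

    spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # expected: non-zero exit, logging
    #   "Cannot use '--wait-for-rpc' if no RPC server is going to be started."

The unclaim_cpu_cores error that follows appears to be part of the same aborted startup.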
00:05:23.022 [2024-07-13 13:16:57.456424] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.022 00:05:23.022 real 0m0.140s 00:05:23.022 user 0m0.084s 00:05:23.022 sys 0m0.056s 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.022 13:16:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:23.022 ************************************ 00:05:23.022 END TEST skip_rpc_with_delay 00:05:23.022 ************************************ 00:05:23.022 13:16:57 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:23.022 13:16:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:23.022 13:16:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:23.022 13:16:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:23.022 13:16:57 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.022 13:16:57 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.022 13:16:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.022 ************************************ 00:05:23.022 START TEST exit_on_failed_rpc_init 00:05:23.022 ************************************ 00:05:23.022 13:16:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:23.022 13:16:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=144533 00:05:23.022 13:16:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.022 13:16:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 144533 00:05:23.022 13:16:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 144533 ']' 00:05:23.022 13:16:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.022 13:16:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.022 13:16:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.022 13:16:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.022 13:16:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.022 [2024-07-13 13:16:57.645898] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:23.022 [2024-07-13 13:16:57.646063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144533 ] 00:05:23.022 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.280 [2024-07-13 13:16:57.771499] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.280 [2024-07-13 13:16:58.023328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.215 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.215 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:24.215 13:16:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.215 13:16:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.215 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:24.215 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.215 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.215 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.215 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.215 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.215 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.215 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.216 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.216 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:24.216 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.474 [2024-07-13 13:16:59.002147] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:24.474 [2024-07-13 13:16:59.002311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144678 ] 00:05:24.474 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.474 [2024-07-13 13:16:59.133250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.733 [2024-07-13 13:16:59.387029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.733 [2024-07-13 13:16:59.387197] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:24.733 [2024-07-13 13:16:59.387228] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:24.733 [2024-07-13 13:16:59.387248] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 144533 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 144533 ']' 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 144533 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144533 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144533' 00:05:25.299 killing process with pid 144533 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 144533 00:05:25.299 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 144533 00:05:27.829 00:05:27.829 real 0m4.798s 00:05:27.829 user 0m5.500s 00:05:27.829 sys 0m0.714s 00:05:27.829 13:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.829 13:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.829 ************************************ 00:05:27.829 END TEST exit_on_failed_rpc_init 00:05:27.829 ************************************ 00:05:27.829 13:17:02 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:27.829 13:17:02 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:27.829 00:05:27.829 real 0m24.260s 00:05:27.829 user 0m23.667s 00:05:27.829 sys 0m2.512s 00:05:27.829 13:17:02 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.829 13:17:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.829 ************************************ 00:05:27.829 END TEST skip_rpc 00:05:27.829 ************************************ 00:05:27.829 13:17:02 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.829 13:17:02 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.829 13:17:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.829 13:17:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.829 13:17:02 -- common/autotest_common.sh@10 -- # set +x 00:05:27.829 ************************************ 00:05:27.829 START TEST rpc_client 00:05:27.829 ************************************ 00:05:27.829 13:17:02 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.829 * Looking for test storage... 00:05:27.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:27.829 13:17:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:27.829 OK 00:05:27.829 13:17:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:27.829 00:05:27.829 real 0m0.099s 00:05:27.829 user 0m0.046s 00:05:27.829 sys 0m0.059s 00:05:27.829 13:17:02 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.829 13:17:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:27.829 ************************************ 00:05:27.829 END TEST rpc_client 00:05:27.829 ************************************ 00:05:27.829 13:17:02 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.829 13:17:02 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:27.829 13:17:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.829 13:17:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.829 13:17:02 -- common/autotest_common.sh@10 -- # set +x 00:05:27.829 ************************************ 00:05:27.829 START TEST json_config 00:05:27.829 ************************************ 00:05:27.829 13:17:02 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.088 13:17:02 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:28.088 13:17:02 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.088 13:17:02 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.088 13:17:02 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.088 13:17:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.088 13:17:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.088 13:17:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.088 13:17:02 json_config -- paths/export.sh@5 -- # export PATH 00:05:28.088 13:17:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@47 -- # : 0 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.088 13:17:02 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:28.088 13:17:02 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:28.088 INFO: JSON configuration test init 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:28.088 13:17:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.088 13:17:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:28.088 13:17:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.088 13:17:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.088 13:17:02 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:28.088 13:17:02 json_config -- json_config/common.sh@9 -- # local app=target 00:05:28.088 13:17:02 json_config -- json_config/common.sh@10 -- # shift 00:05:28.088 13:17:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.088 13:17:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.088 13:17:02 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:05:28.088 13:17:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.088 13:17:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.088 13:17:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=145304 00:05:28.088 13:17:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:28.088 13:17:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.088 Waiting for target to run... 00:05:28.088 13:17:02 json_config -- json_config/common.sh@25 -- # waitforlisten 145304 /var/tmp/spdk_tgt.sock 00:05:28.088 13:17:02 json_config -- common/autotest_common.sh@829 -- # '[' -z 145304 ']' 00:05:28.088 13:17:02 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.088 13:17:02 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.088 13:17:02 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.088 13:17:02 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.088 13:17:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.088 [2024-07-13 13:17:02.707463] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:28.088 [2024-07-13 13:17:02.707614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145304 ] 00:05:28.088 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.659 [2024-07-13 13:17:03.280178] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.917 [2024-07-13 13:17:03.506418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.917 13:17:03 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.917 13:17:03 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:28.917 13:17:03 json_config -- json_config/common.sh@26 -- # echo '' 00:05:28.917 00:05:28.917 13:17:03 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:28.917 13:17:03 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:28.917 13:17:03 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.917 13:17:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.917 13:17:03 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:28.917 13:17:03 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:28.917 13:17:03 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.917 13:17:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.917 13:17:03 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:28.917 13:17:03 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:28.917 13:17:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:33.100 13:17:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.100 13:17:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:33.100 13:17:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:33.100 13:17:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.100 13:17:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:33.100 13:17:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.100 13:17:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:33.100 13:17:07 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.100 13:17:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.358 MallocForNvmf0 00:05:33.358 13:17:07 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.358 13:17:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.615 MallocForNvmf1 
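For readability, the target configuration that the json_config test drives in the trace immediately above and below reduces to roughly the following rpc.py sequence. This is a condensed sketch of the calls visible in the log, with the long workspace paths shortened; it is not a verbatim excerpt of the test script.

  # Sketch: NVMe-oF/TCP target setup over /var/tmp/spdk_tgt.sock, as seen in this run
  RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB malloc bdev, 512 B blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB malloc bdev, 1024 B blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, options as in the trace
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420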
00:05:33.615 13:17:08 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.615 13:17:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.873 [2024-07-13 13:17:08.450165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.873 13:17:08 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.873 13:17:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.132 13:17:08 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.132 13:17:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.390 13:17:08 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.390 13:17:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.683 13:17:09 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.683 13:17:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.941 [2024-07-13 13:17:09.425621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:34.942 13:17:09 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:34.942 13:17:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.942 13:17:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.942 13:17:09 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:34.942 13:17:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.942 13:17:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.942 13:17:09 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:34.942 13:17:09 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.942 13:17:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.199 MallocBdevForConfigChangeCheck 00:05:35.199 13:17:09 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:35.199 13:17:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.199 13:17:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.199 13:17:09 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:35.199 13:17:09 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.457 13:17:10 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:35.457 INFO: shutting down applications... 00:05:35.457 13:17:10 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:35.457 13:17:10 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:35.457 13:17:10 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:35.457 13:17:10 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:37.358 Calling clear_iscsi_subsystem 00:05:37.358 Calling clear_nvmf_subsystem 00:05:37.358 Calling clear_nbd_subsystem 00:05:37.358 Calling clear_ublk_subsystem 00:05:37.358 Calling clear_vhost_blk_subsystem 00:05:37.358 Calling clear_vhost_scsi_subsystem 00:05:37.358 Calling clear_bdev_subsystem 00:05:37.358 13:17:11 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:37.358 13:17:11 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:37.358 13:17:11 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:37.358 13:17:11 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.358 13:17:11 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:37.358 13:17:11 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:37.616 13:17:12 json_config -- json_config/json_config.sh@345 -- # break 00:05:37.616 13:17:12 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:37.616 13:17:12 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:37.616 13:17:12 json_config -- json_config/common.sh@31 -- # local app=target 00:05:37.616 13:17:12 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:37.616 13:17:12 json_config -- json_config/common.sh@35 -- # [[ -n 145304 ]] 00:05:37.616 13:17:12 json_config -- json_config/common.sh@38 -- # kill -SIGINT 145304 00:05:37.616 13:17:12 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:37.616 13:17:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.616 13:17:12 json_config -- json_config/common.sh@41 -- # kill -0 145304 00:05:37.616 13:17:12 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.182 13:17:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.182 13:17:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.182 13:17:12 json_config -- json_config/common.sh@41 -- # kill -0 145304 00:05:38.182 13:17:12 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.440 13:17:13 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.440 13:17:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.440 13:17:13 json_config -- json_config/common.sh@41 -- # kill -0 145304 00:05:38.440 13:17:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.005 13:17:13 
json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.005 13:17:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.005 13:17:13 json_config -- json_config/common.sh@41 -- # kill -0 145304 00:05:39.005 13:17:13 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.005 13:17:13 json_config -- json_config/common.sh@43 -- # break 00:05:39.005 13:17:13 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.005 13:17:13 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.005 SPDK target shutdown done 00:05:39.005 13:17:13 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:39.005 INFO: relaunching applications... 00:05:39.005 13:17:13 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.005 13:17:13 json_config -- json_config/common.sh@9 -- # local app=target 00:05:39.005 13:17:13 json_config -- json_config/common.sh@10 -- # shift 00:05:39.005 13:17:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.005 13:17:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.005 13:17:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.006 13:17:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.006 13:17:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.006 13:17:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=146647 00:05:39.006 13:17:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.006 13:17:13 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.006 Waiting for target to run... 00:05:39.006 13:17:13 json_config -- json_config/common.sh@25 -- # waitforlisten 146647 /var/tmp/spdk_tgt.sock 00:05:39.006 13:17:13 json_config -- common/autotest_common.sh@829 -- # '[' -z 146647 ']' 00:05:39.006 13:17:13 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.006 13:17:13 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.006 13:17:13 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.006 13:17:13 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.006 13:17:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.006 [2024-07-13 13:17:13.742534] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:39.006 [2024-07-13 13:17:13.742680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146647 ] 00:05:39.264 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.830 [2024-07-13 13:17:14.342288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.087 [2024-07-13 13:17:14.579711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.292 [2024-07-13 13:17:18.295331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:44.292 [2024-07-13 13:17:18.327846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:44.292 13:17:18 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.292 13:17:18 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:44.292 13:17:18 json_config -- json_config/common.sh@26 -- # echo '' 00:05:44.292 00:05:44.292 13:17:18 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:44.292 13:17:18 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:44.292 INFO: Checking if target configuration is the same... 00:05:44.292 13:17:18 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.292 13:17:18 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:44.292 13:17:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.292 + '[' 2 -ne 2 ']' 00:05:44.292 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:44.292 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:44.292 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:44.292 +++ basename /dev/fd/62 00:05:44.292 ++ mktemp /tmp/62.XXX 00:05:44.292 + tmp_file_1=/tmp/62.s5q 00:05:44.292 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.292 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:44.292 + tmp_file_2=/tmp/spdk_tgt_config.json.gdq 00:05:44.292 + ret=0 00:05:44.292 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:44.548 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:44.548 + diff -u /tmp/62.s5q /tmp/spdk_tgt_config.json.gdq 00:05:44.548 + echo 'INFO: JSON config files are the same' 00:05:44.549 INFO: JSON config files are the same 00:05:44.549 + rm /tmp/62.s5q /tmp/spdk_tgt_config.json.gdq 00:05:44.549 + exit 0 00:05:44.549 13:17:19 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:44.549 13:17:19 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:44.549 INFO: changing configuration and checking if this can be detected... 
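The "configuration is the same" check above (and the change-detection check that follows) is a normalize-and-diff: dump the running config over RPC, sort both JSON documents with config_filter.py, and compare them. A rough sketch of that idea, with the temporary file names invented for illustration rather than taken from json_diff.sh:

  # Sketch: compare the live target config against the JSON the target was started with
  RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  SORT="test/json_config/config_filter.py -method sort"
  $RPC save_config | $SORT > /tmp/live_sorted.json          # running config, normalized
  $SORT < spdk_tgt_config.json > /tmp/file_sorted.json      # on-disk config, normalized
  diff -u /tmp/file_sorted.json /tmp/live_sorted.json \
      && echo 'INFO: JSON config files are the same'        # non-zero diff means a change was detected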
00:05:44.549 13:17:19 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:44.549 13:17:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:44.806 13:17:19 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.806 13:17:19 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:44.806 13:17:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.806 + '[' 2 -ne 2 ']' 00:05:44.806 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:44.806 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:44.806 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:44.806 +++ basename /dev/fd/62 00:05:44.806 ++ mktemp /tmp/62.XXX 00:05:44.806 + tmp_file_1=/tmp/62.GNb 00:05:44.806 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.806 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:44.806 + tmp_file_2=/tmp/spdk_tgt_config.json.tna 00:05:44.806 + ret=0 00:05:44.806 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.369 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.369 + diff -u /tmp/62.GNb /tmp/spdk_tgt_config.json.tna 00:05:45.369 + ret=1 00:05:45.369 + echo '=== Start of file: /tmp/62.GNb ===' 00:05:45.369 + cat /tmp/62.GNb 00:05:45.369 + echo '=== End of file: /tmp/62.GNb ===' 00:05:45.369 + echo '' 00:05:45.369 + echo '=== Start of file: /tmp/spdk_tgt_config.json.tna ===' 00:05:45.369 + cat /tmp/spdk_tgt_config.json.tna 00:05:45.369 + echo '=== End of file: /tmp/spdk_tgt_config.json.tna ===' 00:05:45.369 + echo '' 00:05:45.369 + rm /tmp/62.GNb /tmp/spdk_tgt_config.json.tna 00:05:45.369 + exit 1 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:45.370 INFO: configuration change detected. 
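The shutdown sequences in this log (json_config above and json_config_extra_key below) follow the same pattern from json_config/common.sh: send SIGINT to the target and poll with kill -0 until it exits. A simplified sketch, assuming the 30 x 0.5 s bound suggested by the "(( i < 30 ))" checks in the trace:

  # Sketch: graceful SPDK target shutdown with a bounded poll
  pid=$1                                      # spdk_tgt PID (e.g. 145304 / 147819 in this run)
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break     # process gone: stop waiting
      sleep 0.5
  done
  echo 'SPDK target shutdown done'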
00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:45.370 13:17:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.370 13:17:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@317 -- # [[ -n 146647 ]] 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:45.370 13:17:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.370 13:17:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:45.370 13:17:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.370 13:17:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.370 13:17:19 json_config -- json_config/json_config.sh@323 -- # killprocess 146647 00:05:45.370 13:17:19 json_config -- common/autotest_common.sh@948 -- # '[' -z 146647 ']' 00:05:45.370 13:17:19 json_config -- common/autotest_common.sh@952 -- # kill -0 146647 00:05:45.370 13:17:19 json_config -- common/autotest_common.sh@953 -- # uname 00:05:45.370 13:17:19 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.370 13:17:19 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 146647 00:05:45.370 13:17:20 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.370 13:17:20 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.370 13:17:20 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 146647' 00:05:45.370 killing process with pid 146647 00:05:45.370 13:17:20 json_config -- common/autotest_common.sh@967 -- # kill 146647 00:05:45.370 13:17:20 json_config -- common/autotest_common.sh@972 -- # wait 146647 00:05:47.896 13:17:22 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.896 13:17:22 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:47.896 13:17:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.896 13:17:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.896 13:17:22 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:47.896 13:17:22 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:47.896 INFO: Success 00:05:47.896 00:05:47.896 real 0m19.900s 00:05:47.896 user 
0m21.328s 00:05:47.896 sys 0m2.569s 00:05:47.896 13:17:22 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.896 13:17:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.897 ************************************ 00:05:47.897 END TEST json_config 00:05:47.897 ************************************ 00:05:47.897 13:17:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.897 13:17:22 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:47.897 13:17:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.897 13:17:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.897 13:17:22 -- common/autotest_common.sh@10 -- # set +x 00:05:47.897 ************************************ 00:05:47.897 START TEST json_config_extra_key 00:05:47.897 ************************************ 00:05:47.897 13:17:22 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:47.897 13:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.897 13:17:22 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.897 13:17:22 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.897 13:17:22 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.897 13:17:22 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.897 13:17:22 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.897 13:17:22 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.897 13:17:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:47.897 13:17:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:47.897 13:17:22 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:47.897 13:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:47.897 13:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:47.897 13:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:47.897 13:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:47.897 13:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:47.897 13:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:47.897 13:17:22 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:47.897 13:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:47.897 13:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:47.897 13:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:47.897 13:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:47.897 INFO: launching applications... 00:05:47.897 13:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:47.897 13:17:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:47.897 13:17:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:47.897 13:17:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:47.897 13:17:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:47.897 13:17:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:47.897 13:17:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.897 13:17:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.897 13:17:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=147819 00:05:47.897 13:17:22 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:47.897 13:17:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:47.897 Waiting for target to run... 00:05:47.897 13:17:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 147819 /var/tmp/spdk_tgt.sock 00:05:47.897 13:17:22 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 147819 ']' 00:05:47.897 13:17:22 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:47.897 13:17:22 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.898 13:17:22 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:47.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:47.898 13:17:22 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.898 13:17:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:48.156 [2024-07-13 13:17:22.653441] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:48.156 [2024-07-13 13:17:22.653583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147819 ] 00:05:48.156 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.721 [2024-07-13 13:17:23.244947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.978 [2024-07-13 13:17:23.479980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.543 13:17:24 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.543 13:17:24 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:49.543 13:17:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:49.543 00:05:49.543 13:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:49.543 INFO: shutting down applications... 00:05:49.543 13:17:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:49.543 13:17:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:49.543 13:17:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:49.543 13:17:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 147819 ]] 00:05:49.543 13:17:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 147819 00:05:49.543 13:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:49.543 13:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.543 13:17:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 147819 00:05:49.543 13:17:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.107 13:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.107 13:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.107 13:17:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 147819 00:05:50.107 13:17:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.672 13:17:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.672 13:17:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.672 13:17:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 147819 00:05:50.672 13:17:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.255 13:17:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.255 13:17:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.255 13:17:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 147819 00:05:51.255 13:17:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.513 13:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.513 13:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.513 13:17:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 147819 00:05:51.513 13:17:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.079 13:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.079 13:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.079 13:17:26 json_config_extra_key -- 
json_config/common.sh@41 -- # kill -0 147819 00:05:52.079 13:17:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.645 13:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.645 13:17:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.645 13:17:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 147819 00:05:52.645 13:17:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:52.645 13:17:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:52.645 13:17:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:52.645 13:17:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:52.645 SPDK target shutdown done 00:05:52.645 13:17:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:52.645 Success 00:05:52.645 00:05:52.645 real 0m4.702s 00:05:52.645 user 0m4.253s 00:05:52.645 sys 0m0.822s 00:05:52.645 13:17:27 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.645 13:17:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:52.645 ************************************ 00:05:52.645 END TEST json_config_extra_key 00:05:52.645 ************************************ 00:05:52.645 13:17:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:52.645 13:17:27 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.645 13:17:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.645 13:17:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.645 13:17:27 -- common/autotest_common.sh@10 -- # set +x 00:05:52.645 ************************************ 00:05:52.645 START TEST alias_rpc 00:05:52.645 ************************************ 00:05:52.645 13:17:27 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.645 * Looking for test storage... 00:05:52.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:52.645 13:17:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:52.645 13:17:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=148527 00:05:52.645 13:17:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.645 13:17:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 148527 00:05:52.645 13:17:27 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 148527 ']' 00:05:52.645 13:17:27 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.645 13:17:27 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.645 13:17:27 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.645 13:17:27 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.645 13:17:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.903 [2024-07-13 13:17:27.401798] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:52.903 [2024-07-13 13:17:27.401981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148527 ] 00:05:52.903 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.903 [2024-07-13 13:17:27.523595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.162 [2024-07-13 13:17:27.781124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.095 13:17:28 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.095 13:17:28 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:54.095 13:17:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:54.353 13:17:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 148527 00:05:54.353 13:17:28 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 148527 ']' 00:05:54.353 13:17:28 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 148527 00:05:54.353 13:17:28 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:54.353 13:17:28 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.353 13:17:28 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 148527 00:05:54.353 13:17:28 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.353 13:17:28 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.353 13:17:28 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 148527' 00:05:54.353 killing process with pid 148527 00:05:54.353 13:17:28 alias_rpc -- common/autotest_common.sh@967 -- # kill 148527 00:05:54.353 13:17:28 alias_rpc -- common/autotest_common.sh@972 -- # wait 148527 00:05:56.886 00:05:56.886 real 0m4.261s 00:05:56.886 user 0m4.387s 00:05:56.886 sys 0m0.626s 00:05:56.886 13:17:31 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.886 13:17:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.886 ************************************ 00:05:56.886 END TEST alias_rpc 00:05:56.886 ************************************ 00:05:56.886 13:17:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.886 13:17:31 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:56.887 13:17:31 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:56.887 13:17:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.887 13:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.887 13:17:31 -- common/autotest_common.sh@10 -- # set +x 00:05:56.887 ************************************ 00:05:56.887 START TEST spdkcli_tcp 00:05:56.887 ************************************ 00:05:56.887 13:17:31 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:56.887 * Looking for test storage... 
00:05:56.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:56.887 13:17:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:56.887 13:17:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:56.887 13:17:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:56.887 13:17:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:56.887 13:17:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:56.887 13:17:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:56.887 13:17:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:56.887 13:17:31 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:56.887 13:17:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:56.887 13:17:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=148992 00:05:56.887 13:17:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:56.887 13:17:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 148992 00:05:56.887 13:17:31 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 148992 ']' 00:05:56.887 13:17:31 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.887 13:17:31 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.887 13:17:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.887 13:17:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.887 13:17:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.145 [2024-07-13 13:17:31.720326] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:57.145 [2024-07-13 13:17:31.720473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148992 ] 00:05:57.145 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.145 [2024-07-13 13:17:31.860116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.402 [2024-07-13 13:17:32.122928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.402 [2024-07-13 13:17:32.122937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.334 13:17:33 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.334 13:17:33 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:58.334 13:17:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=149256 00:05:58.334 13:17:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:58.334 13:17:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:58.591 [ 00:05:58.591 "bdev_malloc_delete", 00:05:58.591 "bdev_malloc_create", 00:05:58.591 "bdev_null_resize", 00:05:58.591 "bdev_null_delete", 00:05:58.591 "bdev_null_create", 00:05:58.591 "bdev_nvme_cuse_unregister", 00:05:58.591 "bdev_nvme_cuse_register", 00:05:58.591 "bdev_opal_new_user", 00:05:58.591 "bdev_opal_set_lock_state", 00:05:58.591 "bdev_opal_delete", 00:05:58.591 "bdev_opal_get_info", 00:05:58.591 "bdev_opal_create", 00:05:58.591 "bdev_nvme_opal_revert", 00:05:58.591 "bdev_nvme_opal_init", 00:05:58.591 "bdev_nvme_send_cmd", 00:05:58.591 "bdev_nvme_get_path_iostat", 00:05:58.591 "bdev_nvme_get_mdns_discovery_info", 00:05:58.591 "bdev_nvme_stop_mdns_discovery", 00:05:58.591 "bdev_nvme_start_mdns_discovery", 00:05:58.591 "bdev_nvme_set_multipath_policy", 00:05:58.591 "bdev_nvme_set_preferred_path", 00:05:58.591 "bdev_nvme_get_io_paths", 00:05:58.591 "bdev_nvme_remove_error_injection", 00:05:58.591 "bdev_nvme_add_error_injection", 00:05:58.591 "bdev_nvme_get_discovery_info", 00:05:58.591 "bdev_nvme_stop_discovery", 00:05:58.591 "bdev_nvme_start_discovery", 00:05:58.591 "bdev_nvme_get_controller_health_info", 00:05:58.591 "bdev_nvme_disable_controller", 00:05:58.591 "bdev_nvme_enable_controller", 00:05:58.591 "bdev_nvme_reset_controller", 00:05:58.591 "bdev_nvme_get_transport_statistics", 00:05:58.591 "bdev_nvme_apply_firmware", 00:05:58.591 "bdev_nvme_detach_controller", 00:05:58.591 "bdev_nvme_get_controllers", 00:05:58.591 "bdev_nvme_attach_controller", 00:05:58.591 "bdev_nvme_set_hotplug", 00:05:58.591 "bdev_nvme_set_options", 00:05:58.591 "bdev_passthru_delete", 00:05:58.591 "bdev_passthru_create", 00:05:58.591 "bdev_lvol_set_parent_bdev", 00:05:58.591 "bdev_lvol_set_parent", 00:05:58.591 "bdev_lvol_check_shallow_copy", 00:05:58.591 "bdev_lvol_start_shallow_copy", 00:05:58.591 "bdev_lvol_grow_lvstore", 00:05:58.591 "bdev_lvol_get_lvols", 00:05:58.591 "bdev_lvol_get_lvstores", 00:05:58.591 "bdev_lvol_delete", 00:05:58.591 "bdev_lvol_set_read_only", 00:05:58.591 "bdev_lvol_resize", 00:05:58.591 "bdev_lvol_decouple_parent", 00:05:58.591 "bdev_lvol_inflate", 00:05:58.591 "bdev_lvol_rename", 00:05:58.591 "bdev_lvol_clone_bdev", 00:05:58.591 "bdev_lvol_clone", 00:05:58.591 "bdev_lvol_snapshot", 00:05:58.591 "bdev_lvol_create", 00:05:58.591 "bdev_lvol_delete_lvstore", 00:05:58.591 
"bdev_lvol_rename_lvstore", 00:05:58.591 "bdev_lvol_create_lvstore", 00:05:58.591 "bdev_raid_set_options", 00:05:58.591 "bdev_raid_remove_base_bdev", 00:05:58.591 "bdev_raid_add_base_bdev", 00:05:58.591 "bdev_raid_delete", 00:05:58.591 "bdev_raid_create", 00:05:58.591 "bdev_raid_get_bdevs", 00:05:58.591 "bdev_error_inject_error", 00:05:58.591 "bdev_error_delete", 00:05:58.591 "bdev_error_create", 00:05:58.591 "bdev_split_delete", 00:05:58.591 "bdev_split_create", 00:05:58.591 "bdev_delay_delete", 00:05:58.591 "bdev_delay_create", 00:05:58.591 "bdev_delay_update_latency", 00:05:58.591 "bdev_zone_block_delete", 00:05:58.591 "bdev_zone_block_create", 00:05:58.591 "blobfs_create", 00:05:58.591 "blobfs_detect", 00:05:58.591 "blobfs_set_cache_size", 00:05:58.591 "bdev_aio_delete", 00:05:58.591 "bdev_aio_rescan", 00:05:58.591 "bdev_aio_create", 00:05:58.591 "bdev_ftl_set_property", 00:05:58.591 "bdev_ftl_get_properties", 00:05:58.591 "bdev_ftl_get_stats", 00:05:58.591 "bdev_ftl_unmap", 00:05:58.591 "bdev_ftl_unload", 00:05:58.591 "bdev_ftl_delete", 00:05:58.591 "bdev_ftl_load", 00:05:58.591 "bdev_ftl_create", 00:05:58.591 "bdev_virtio_attach_controller", 00:05:58.591 "bdev_virtio_scsi_get_devices", 00:05:58.591 "bdev_virtio_detach_controller", 00:05:58.591 "bdev_virtio_blk_set_hotplug", 00:05:58.591 "bdev_iscsi_delete", 00:05:58.591 "bdev_iscsi_create", 00:05:58.591 "bdev_iscsi_set_options", 00:05:58.591 "accel_error_inject_error", 00:05:58.591 "ioat_scan_accel_module", 00:05:58.591 "dsa_scan_accel_module", 00:05:58.591 "iaa_scan_accel_module", 00:05:58.591 "keyring_file_remove_key", 00:05:58.591 "keyring_file_add_key", 00:05:58.591 "keyring_linux_set_options", 00:05:58.591 "iscsi_get_histogram", 00:05:58.591 "iscsi_enable_histogram", 00:05:58.591 "iscsi_set_options", 00:05:58.591 "iscsi_get_auth_groups", 00:05:58.591 "iscsi_auth_group_remove_secret", 00:05:58.591 "iscsi_auth_group_add_secret", 00:05:58.591 "iscsi_delete_auth_group", 00:05:58.591 "iscsi_create_auth_group", 00:05:58.591 "iscsi_set_discovery_auth", 00:05:58.591 "iscsi_get_options", 00:05:58.591 "iscsi_target_node_request_logout", 00:05:58.591 "iscsi_target_node_set_redirect", 00:05:58.591 "iscsi_target_node_set_auth", 00:05:58.591 "iscsi_target_node_add_lun", 00:05:58.591 "iscsi_get_stats", 00:05:58.591 "iscsi_get_connections", 00:05:58.591 "iscsi_portal_group_set_auth", 00:05:58.591 "iscsi_start_portal_group", 00:05:58.591 "iscsi_delete_portal_group", 00:05:58.591 "iscsi_create_portal_group", 00:05:58.591 "iscsi_get_portal_groups", 00:05:58.591 "iscsi_delete_target_node", 00:05:58.591 "iscsi_target_node_remove_pg_ig_maps", 00:05:58.592 "iscsi_target_node_add_pg_ig_maps", 00:05:58.592 "iscsi_create_target_node", 00:05:58.592 "iscsi_get_target_nodes", 00:05:58.592 "iscsi_delete_initiator_group", 00:05:58.592 "iscsi_initiator_group_remove_initiators", 00:05:58.592 "iscsi_initiator_group_add_initiators", 00:05:58.592 "iscsi_create_initiator_group", 00:05:58.592 "iscsi_get_initiator_groups", 00:05:58.592 "nvmf_set_crdt", 00:05:58.592 "nvmf_set_config", 00:05:58.592 "nvmf_set_max_subsystems", 00:05:58.592 "nvmf_stop_mdns_prr", 00:05:58.592 "nvmf_publish_mdns_prr", 00:05:58.592 "nvmf_subsystem_get_listeners", 00:05:58.592 "nvmf_subsystem_get_qpairs", 00:05:58.592 "nvmf_subsystem_get_controllers", 00:05:58.592 "nvmf_get_stats", 00:05:58.592 "nvmf_get_transports", 00:05:58.592 "nvmf_create_transport", 00:05:58.592 "nvmf_get_targets", 00:05:58.592 "nvmf_delete_target", 00:05:58.592 "nvmf_create_target", 00:05:58.592 
"nvmf_subsystem_allow_any_host", 00:05:58.592 "nvmf_subsystem_remove_host", 00:05:58.592 "nvmf_subsystem_add_host", 00:05:58.592 "nvmf_ns_remove_host", 00:05:58.592 "nvmf_ns_add_host", 00:05:58.592 "nvmf_subsystem_remove_ns", 00:05:58.592 "nvmf_subsystem_add_ns", 00:05:58.592 "nvmf_subsystem_listener_set_ana_state", 00:05:58.592 "nvmf_discovery_get_referrals", 00:05:58.592 "nvmf_discovery_remove_referral", 00:05:58.592 "nvmf_discovery_add_referral", 00:05:58.592 "nvmf_subsystem_remove_listener", 00:05:58.592 "nvmf_subsystem_add_listener", 00:05:58.592 "nvmf_delete_subsystem", 00:05:58.592 "nvmf_create_subsystem", 00:05:58.592 "nvmf_get_subsystems", 00:05:58.592 "env_dpdk_get_mem_stats", 00:05:58.592 "nbd_get_disks", 00:05:58.592 "nbd_stop_disk", 00:05:58.592 "nbd_start_disk", 00:05:58.592 "ublk_recover_disk", 00:05:58.592 "ublk_get_disks", 00:05:58.592 "ublk_stop_disk", 00:05:58.592 "ublk_start_disk", 00:05:58.592 "ublk_destroy_target", 00:05:58.592 "ublk_create_target", 00:05:58.592 "virtio_blk_create_transport", 00:05:58.592 "virtio_blk_get_transports", 00:05:58.592 "vhost_controller_set_coalescing", 00:05:58.592 "vhost_get_controllers", 00:05:58.592 "vhost_delete_controller", 00:05:58.592 "vhost_create_blk_controller", 00:05:58.592 "vhost_scsi_controller_remove_target", 00:05:58.592 "vhost_scsi_controller_add_target", 00:05:58.592 "vhost_start_scsi_controller", 00:05:58.592 "vhost_create_scsi_controller", 00:05:58.592 "thread_set_cpumask", 00:05:58.592 "framework_get_governor", 00:05:58.592 "framework_get_scheduler", 00:05:58.592 "framework_set_scheduler", 00:05:58.592 "framework_get_reactors", 00:05:58.592 "thread_get_io_channels", 00:05:58.592 "thread_get_pollers", 00:05:58.592 "thread_get_stats", 00:05:58.592 "framework_monitor_context_switch", 00:05:58.592 "spdk_kill_instance", 00:05:58.592 "log_enable_timestamps", 00:05:58.592 "log_get_flags", 00:05:58.592 "log_clear_flag", 00:05:58.592 "log_set_flag", 00:05:58.592 "log_get_level", 00:05:58.592 "log_set_level", 00:05:58.592 "log_get_print_level", 00:05:58.592 "log_set_print_level", 00:05:58.592 "framework_enable_cpumask_locks", 00:05:58.592 "framework_disable_cpumask_locks", 00:05:58.592 "framework_wait_init", 00:05:58.592 "framework_start_init", 00:05:58.592 "scsi_get_devices", 00:05:58.592 "bdev_get_histogram", 00:05:58.592 "bdev_enable_histogram", 00:05:58.592 "bdev_set_qos_limit", 00:05:58.592 "bdev_set_qd_sampling_period", 00:05:58.592 "bdev_get_bdevs", 00:05:58.592 "bdev_reset_iostat", 00:05:58.592 "bdev_get_iostat", 00:05:58.592 "bdev_examine", 00:05:58.592 "bdev_wait_for_examine", 00:05:58.592 "bdev_set_options", 00:05:58.592 "notify_get_notifications", 00:05:58.592 "notify_get_types", 00:05:58.592 "accel_get_stats", 00:05:58.592 "accel_set_options", 00:05:58.592 "accel_set_driver", 00:05:58.592 "accel_crypto_key_destroy", 00:05:58.592 "accel_crypto_keys_get", 00:05:58.592 "accel_crypto_key_create", 00:05:58.592 "accel_assign_opc", 00:05:58.592 "accel_get_module_info", 00:05:58.592 "accel_get_opc_assignments", 00:05:58.592 "vmd_rescan", 00:05:58.592 "vmd_remove_device", 00:05:58.592 "vmd_enable", 00:05:58.592 "sock_get_default_impl", 00:05:58.592 "sock_set_default_impl", 00:05:58.592 "sock_impl_set_options", 00:05:58.592 "sock_impl_get_options", 00:05:58.592 "iobuf_get_stats", 00:05:58.592 "iobuf_set_options", 00:05:58.592 "framework_get_pci_devices", 00:05:58.592 "framework_get_config", 00:05:58.592 "framework_get_subsystems", 00:05:58.592 "trace_get_info", 00:05:58.592 "trace_get_tpoint_group_mask", 00:05:58.592 
"trace_disable_tpoint_group", 00:05:58.592 "trace_enable_tpoint_group", 00:05:58.592 "trace_clear_tpoint_mask", 00:05:58.592 "trace_set_tpoint_mask", 00:05:58.592 "keyring_get_keys", 00:05:58.592 "spdk_get_version", 00:05:58.592 "rpc_get_methods" 00:05:58.592 ] 00:05:58.592 13:17:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:58.592 13:17:33 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.592 13:17:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.592 13:17:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:58.592 13:17:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 148992 00:05:58.592 13:17:33 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 148992 ']' 00:05:58.592 13:17:33 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 148992 00:05:58.592 13:17:33 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:58.592 13:17:33 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.592 13:17:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 148992 00:05:58.592 13:17:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.592 13:17:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.592 13:17:33 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 148992' 00:05:58.592 killing process with pid 148992 00:05:58.592 13:17:33 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 148992 00:05:58.592 13:17:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 148992 00:06:01.120 00:06:01.120 real 0m4.277s 00:06:01.120 user 0m7.551s 00:06:01.120 sys 0m0.672s 00:06:01.120 13:17:35 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.120 13:17:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.120 ************************************ 00:06:01.120 END TEST spdkcli_tcp 00:06:01.120 ************************************ 00:06:01.379 13:17:35 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.379 13:17:35 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.379 13:17:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.379 13:17:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.379 13:17:35 -- common/autotest_common.sh@10 -- # set +x 00:06:01.379 ************************************ 00:06:01.379 START TEST dpdk_mem_utility 00:06:01.379 ************************************ 00:06:01.379 13:17:35 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.379 * Looking for test storage... 
00:06:01.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:01.379 13:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.379 13:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=149597 00:06:01.379 13:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.379 13:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 149597 00:06:01.379 13:17:35 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 149597 ']' 00:06:01.379 13:17:35 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.379 13:17:35 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.379 13:17:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.379 13:17:35 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.379 13:17:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.379 [2024-07-13 13:17:36.039889] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:01.379 [2024-07-13 13:17:36.040041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149597 ] 00:06:01.379 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.637 [2024-07-13 13:17:36.163671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.896 [2024-07-13 13:17:36.425767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.832 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.832 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:02.832 13:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:02.832 13:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:02.832 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.832 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.832 { 00:06:02.832 "filename": "/tmp/spdk_mem_dump.txt" 00:06:02.832 } 00:06:02.832 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.832 13:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:02.832 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:02.832 1 heaps totaling size 820.000000 MiB 00:06:02.832 size: 820.000000 MiB heap id: 0 00:06:02.832 end heaps---------- 00:06:02.832 8 mempools totaling size 598.116089 MiB 00:06:02.832 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:02.832 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:02.832 size: 84.521057 MiB name: bdev_io_149597 00:06:02.832 size: 51.011292 MiB name: evtpool_149597 00:06:02.832 size: 
50.003479 MiB name: msgpool_149597 00:06:02.832 size: 21.763794 MiB name: PDU_Pool 00:06:02.832 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:02.832 size: 0.026123 MiB name: Session_Pool 00:06:02.832 end mempools------- 00:06:02.832 6 memzones totaling size 4.142822 MiB 00:06:02.832 size: 1.000366 MiB name: RG_ring_0_149597 00:06:02.832 size: 1.000366 MiB name: RG_ring_1_149597 00:06:02.832 size: 1.000366 MiB name: RG_ring_4_149597 00:06:02.832 size: 1.000366 MiB name: RG_ring_5_149597 00:06:02.832 size: 0.125366 MiB name: RG_ring_2_149597 00:06:02.832 size: 0.015991 MiB name: RG_ring_3_149597 00:06:02.832 end memzones------- 00:06:02.832 13:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:02.832 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:06:02.832 list of free elements. size: 18.514832 MiB 00:06:02.832 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:02.832 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:02.832 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:02.832 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:02.832 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:02.832 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:02.832 element at address: 0x200019600000 with size: 0.999329 MiB 00:06:02.832 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:02.832 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:02.832 element at address: 0x200018e00000 with size: 0.959900 MiB 00:06:02.832 element at address: 0x200019900040 with size: 0.937256 MiB 00:06:02.832 element at address: 0x200000200000 with size: 0.840942 MiB 00:06:02.832 element at address: 0x20001b000000 with size: 0.583191 MiB 00:06:02.832 element at address: 0x200019200000 with size: 0.491150 MiB 00:06:02.832 element at address: 0x200019a00000 with size: 0.485657 MiB 00:06:02.832 element at address: 0x200013800000 with size: 0.470581 MiB 00:06:02.832 element at address: 0x200028400000 with size: 0.411072 MiB 00:06:02.832 element at address: 0x200003a00000 with size: 0.356140 MiB 00:06:02.832 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:06:02.832 list of standard malloc elements. 
size: 199.220764 MiB 00:06:02.832 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:02.832 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:02.832 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:02.832 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:02.832 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:02.832 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:02.832 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:02.832 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:02.832 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:06:02.832 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:06:02.832 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:02.832 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:02.832 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:02.832 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:02.832 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:06:02.832 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:06:02.832 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:06:02.832 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:06:02.832 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:06:02.832 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:06:02.832 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:02.832 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:02.832 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:02.832 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:06:02.832 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:02.833 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:02.833 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:02.833 list of memzone associated elements. 
size: 602.264404 MiB 00:06:02.833 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:02.833 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:02.833 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:02.833 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:02.833 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:02.833 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_149597_0 00:06:02.833 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:02.833 associated memzone info: size: 48.002930 MiB name: MP_evtpool_149597_0 00:06:02.833 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:02.833 associated memzone info: size: 48.002930 MiB name: MP_msgpool_149597_0 00:06:02.833 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:02.833 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:02.833 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:02.833 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:02.833 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:02.833 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_149597 00:06:02.833 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:02.833 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_149597 00:06:02.833 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:02.833 associated memzone info: size: 1.007996 MiB name: MP_evtpool_149597 00:06:02.833 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:02.833 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:02.833 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:02.833 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:02.833 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:02.833 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:02.833 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:02.833 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:02.833 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:02.833 associated memzone info: size: 1.000366 MiB name: RG_ring_0_149597 00:06:02.833 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:02.833 associated memzone info: size: 1.000366 MiB name: RG_ring_1_149597 00:06:02.833 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:02.833 associated memzone info: size: 1.000366 MiB name: RG_ring_4_149597 00:06:02.833 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:02.833 associated memzone info: size: 1.000366 MiB name: RG_ring_5_149597 00:06:02.833 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:02.833 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_149597 00:06:02.833 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:06:02.833 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:02.833 element at address: 0x200013878780 with size: 0.500549 MiB 00:06:02.833 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:02.833 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:06:02.833 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:02.833 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:02.833 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_149597 00:06:02.833 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:06:02.833 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:02.833 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:06:02.833 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:02.833 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:02.833 associated memzone info: size: 0.015991 MiB name: RG_ring_3_149597 00:06:02.833 element at address: 0x20002846f540 with size: 0.002502 MiB 00:06:02.833 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:02.833 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:06:02.833 associated memzone info: size: 0.000183 MiB name: MP_msgpool_149597 00:06:02.833 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:02.833 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_149597 00:06:02.833 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:06:02.833 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:02.833 13:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:02.833 13:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 149597 00:06:02.833 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 149597 ']' 00:06:02.833 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 149597 00:06:02.833 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:02.833 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.833 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149597 00:06:02.833 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.833 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.833 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149597' 00:06:02.833 killing process with pid 149597 00:06:02.833 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 149597 00:06:02.833 13:17:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 149597 00:06:05.357 00:06:05.357 real 0m4.132s 00:06:05.357 user 0m4.121s 00:06:05.357 sys 0m0.599s 00:06:05.357 13:17:40 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.357 13:17:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.357 ************************************ 00:06:05.357 END TEST dpdk_mem_utility 00:06:05.357 ************************************ 00:06:05.357 13:17:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.357 13:17:40 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:05.357 13:17:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.357 13:17:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.357 13:17:40 -- common/autotest_common.sh@10 -- # set +x 00:06:05.357 ************************************ 00:06:05.357 START TEST event 00:06:05.357 ************************************ 00:06:05.357 13:17:40 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:05.615 * Looking for test storage... 
00:06:05.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:05.615 13:17:40 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:05.615 13:17:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:05.615 13:17:40 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:05.615 13:17:40 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:05.615 13:17:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.615 13:17:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.615 ************************************ 00:06:05.615 START TEST event_perf 00:06:05.615 ************************************ 00:06:05.615 13:17:40 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:05.615 Running I/O for 1 seconds...[2024-07-13 13:17:40.187599] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:05.615 [2024-07-13 13:17:40.187705] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150178 ] 00:06:05.615 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.615 [2024-07-13 13:17:40.315071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.873 [2024-07-13 13:17:40.578938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.873 [2024-07-13 13:17:40.578992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.873 [2024-07-13 13:17:40.579038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.873 [2024-07-13 13:17:40.579048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.770 Running I/O for 1 seconds... 00:06:07.770 lcore 0: 199071 00:06:07.770 lcore 1: 199072 00:06:07.770 lcore 2: 199072 00:06:07.770 lcore 3: 199072 00:06:07.770 done. 00:06:07.770 00:06:07.770 real 0m1.888s 00:06:07.770 user 0m4.701s 00:06:07.770 sys 0m0.170s 00:06:07.770 13:17:42 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.770 13:17:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.770 ************************************ 00:06:07.770 END TEST event_perf 00:06:07.770 ************************************ 00:06:07.770 13:17:42 event -- common/autotest_common.sh@1142 -- # return 0 00:06:07.770 13:17:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:07.770 13:17:42 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:07.770 13:17:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.770 13:17:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.770 ************************************ 00:06:07.770 START TEST event_reactor 00:06:07.770 ************************************ 00:06:07.770 13:17:42 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:07.770 [2024-07-13 13:17:42.123189] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:07.771 [2024-07-13 13:17:42.123327] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150466 ] 00:06:07.771 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.771 [2024-07-13 13:17:42.270231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.061 [2024-07-13 13:17:42.537065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.445 test_start 00:06:09.445 oneshot 00:06:09.445 tick 100 00:06:09.445 tick 100 00:06:09.445 tick 250 00:06:09.445 tick 100 00:06:09.445 tick 100 00:06:09.445 tick 100 00:06:09.445 tick 250 00:06:09.445 tick 500 00:06:09.445 tick 100 00:06:09.445 tick 100 00:06:09.445 tick 250 00:06:09.445 tick 100 00:06:09.445 tick 100 00:06:09.445 test_end 00:06:09.445 00:06:09.445 real 0m1.904s 00:06:09.445 user 0m1.724s 00:06:09.445 sys 0m0.169s 00:06:09.445 13:17:43 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.445 13:17:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:09.445 ************************************ 00:06:09.445 END TEST event_reactor 00:06:09.445 ************************************ 00:06:09.445 13:17:44 event -- common/autotest_common.sh@1142 -- # return 0 00:06:09.445 13:17:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.445 13:17:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:09.445 13:17:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.445 13:17:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.445 ************************************ 00:06:09.445 START TEST event_reactor_perf 00:06:09.445 ************************************ 00:06:09.445 13:17:44 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.445 [2024-07-13 13:17:44.077464] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:09.445 [2024-07-13 13:17:44.077586] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150633 ] 00:06:09.445 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.703 [2024-07-13 13:17:44.211591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.961 [2024-07-13 13:17:44.474465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.333 test_start 00:06:11.333 test_end 00:06:11.333 Performance: 261035 events per second 00:06:11.333 00:06:11.333 real 0m1.889s 00:06:11.333 user 0m1.716s 00:06:11.333 sys 0m0.162s 00:06:11.333 13:17:45 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.333 13:17:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.333 ************************************ 00:06:11.333 END TEST event_reactor_perf 00:06:11.333 ************************************ 00:06:11.333 13:17:45 event -- common/autotest_common.sh@1142 -- # return 0 00:06:11.333 13:17:45 event -- event/event.sh@49 -- # uname -s 00:06:11.333 13:17:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:11.333 13:17:45 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:11.333 13:17:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.333 13:17:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.333 13:17:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.333 ************************************ 00:06:11.333 START TEST event_scheduler 00:06:11.333 ************************************ 00:06:11.333 13:17:45 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:11.333 * Looking for test storage... 00:06:11.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:11.333 13:17:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:11.333 13:17:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=150938 00:06:11.333 13:17:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:11.333 13:17:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.333 13:17:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 150938 00:06:11.333 13:17:46 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 150938 ']' 00:06:11.333 13:17:46 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.333 13:17:46 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.333 13:17:46 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
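The waitforlisten step logged here simply polls the target's RPC socket until it answers. A minimal stand-alone sketch of that idea, using only the rpc_get_methods call that also appears later in this log (the socket path and retry count are assumptions, and this is not the exact helper from autotest_common.sh):

  # poll the UNIX-domain RPC socket until the freshly started app responds
  for i in $(seq 1 100); do
      if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break        # target is up and serving RPCs
      fi
      sleep 0.5        # brief back-off between probes
  done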
00:06:11.333 13:17:46 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.333 13:17:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.591 [2024-07-13 13:17:46.120152] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:11.591 [2024-07-13 13:17:46.120318] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150938 ] 00:06:11.591 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.591 [2024-07-13 13:17:46.251038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.849 [2024-07-13 13:17:46.473529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.849 [2024-07-13 13:17:46.473586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.849 [2024-07-13 13:17:46.473637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.849 [2024-07-13 13:17:46.473643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.416 13:17:47 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.416 13:17:47 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:12.416 13:17:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:12.416 13:17:47 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.416 13:17:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.416 [2024-07-13 13:17:47.008251] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:12.416 [2024-07-13 13:17:47.008298] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:12.416 [2024-07-13 13:17:47.008330] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:12.416 [2024-07-13 13:17:47.008353] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:12.416 [2024-07-13 13:17:47.008370] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:12.416 13:17:47 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.416 13:17:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:12.416 13:17:47 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.416 13:17:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.674 [2024-07-13 13:17:47.308369] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:12.674 13:17:47 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.674 13:17:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:12.674 13:17:47 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.674 13:17:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.674 13:17:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.674 ************************************ 00:06:12.674 START TEST scheduler_create_thread 00:06:12.674 ************************************ 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.674 2 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.674 3 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.674 4 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.674 5 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.674 6 00:06:12.674 13:17:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.675 7 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.675 8 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.675 9 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.675 10 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.675 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.932 00:06:12.932 real 0m0.112s 00:06:12.932 user 0m0.008s 00:06:12.932 sys 0m0.005s 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.932 13:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.932 ************************************ 00:06:12.932 END TEST scheduler_create_thread 00:06:12.932 ************************************ 00:06:12.932 13:17:47 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:12.932 13:17:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:12.932 13:17:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 150938 00:06:12.932 13:17:47 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 150938 ']' 00:06:12.932 13:17:47 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 150938 00:06:12.932 13:17:47 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:12.932 13:17:47 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.932 13:17:47 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 150938 00:06:12.932 13:17:47 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:12.932 13:17:47 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:12.932 13:17:47 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 150938' 00:06:12.932 killing process with pid 150938 00:06:12.932 13:17:47 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 150938 00:06:12.932 13:17:47 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 150938 00:06:13.498 [2024-07-13 13:17:47.936053] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
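The scheduler_create_thread subtest above drove the (now stopped) app through a test-only RPC plugin; its scheduler_thread_create / scheduler_thread_set_active / scheduler_thread_delete calls are visible verbatim in the transcript. Issued by hand against a fresh instance they would look roughly like the following — the PYTHONPATH needed to locate the plugin is an assumption, while the method names and arguments are taken from the log:

  # the plugin ships with the scheduler test app, not with scripts/rpc.py itself
  export PYTHONPATH=./test/event/scheduler
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12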
00:06:14.434 00:06:14.434 real 0m3.047s 00:06:14.434 user 0m4.785s 00:06:14.434 sys 0m0.471s 00:06:14.434 13:17:49 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.434 13:17:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.434 ************************************ 00:06:14.434 END TEST event_scheduler 00:06:14.434 ************************************ 00:06:14.434 13:17:49 event -- common/autotest_common.sh@1142 -- # return 0 00:06:14.434 13:17:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:14.434 13:17:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:14.434 13:17:49 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.434 13:17:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.434 13:17:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.434 ************************************ 00:06:14.434 START TEST app_repeat 00:06:14.434 ************************************ 00:06:14.434 13:17:49 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=151390 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 151390' 00:06:14.434 Process app_repeat pid: 151390 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:14.434 spdk_app_start Round 0 00:06:14.434 13:17:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 151390 /var/tmp/spdk-nbd.sock 00:06:14.434 13:17:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 151390 ']' 00:06:14.434 13:17:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.434 13:17:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.434 13:17:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.434 13:17:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.434 13:17:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.434 [2024-07-13 13:17:49.140962] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:14.434 [2024-07-13 13:17:49.141136] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151390 ] 00:06:14.693 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.693 [2024-07-13 13:17:49.275245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.951 [2024-07-13 13:17:49.534405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.951 [2024-07-13 13:17:49.534412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.517 13:17:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.517 13:17:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:15.517 13:17:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.774 Malloc0 00:06:15.774 13:17:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.031 Malloc1 00:06:16.031 13:17:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.031 13:17:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.288 /dev/nbd0 00:06:16.288 13:17:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.288 13:17:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.288 13:17:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:16.288 13:17:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:16.288 13:17:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:16.288 13:17:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:16.288 13:17:51 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:16.288 13:17:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:16.288 13:17:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:16.288 13:17:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:16.289 13:17:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.289 1+0 records in 00:06:16.289 1+0 records out 00:06:16.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214386 s, 19.1 MB/s 00:06:16.289 13:17:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.547 13:17:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:16.547 13:17:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.547 13:17:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:16.547 13:17:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:16.547 13:17:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.547 13:17:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.547 13:17:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.547 /dev/nbd1 00:06:16.547 13:17:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.547 13:17:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.547 13:17:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:16.547 13:17:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:16.547 13:17:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:16.547 13:17:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:16.547 13:17:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:16.547 13:17:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:16.547 13:17:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:16.547 13:17:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:16.547 13:17:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.805 1+0 records in 00:06:16.805 1+0 records out 00:06:16.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236121 s, 17.3 MB/s 00:06:16.805 13:17:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.805 13:17:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:16.805 13:17:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.805 13:17:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:16.805 13:17:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:16.805 13:17:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.805 13:17:51 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.805 13:17:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.805 13:17:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.805 13:17:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.064 { 00:06:17.064 "nbd_device": "/dev/nbd0", 00:06:17.064 "bdev_name": "Malloc0" 00:06:17.064 }, 00:06:17.064 { 00:06:17.064 "nbd_device": "/dev/nbd1", 00:06:17.064 "bdev_name": "Malloc1" 00:06:17.064 } 00:06:17.064 ]' 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.064 { 00:06:17.064 "nbd_device": "/dev/nbd0", 00:06:17.064 "bdev_name": "Malloc0" 00:06:17.064 }, 00:06:17.064 { 00:06:17.064 "nbd_device": "/dev/nbd1", 00:06:17.064 "bdev_name": "Malloc1" 00:06:17.064 } 00:06:17.064 ]' 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.064 /dev/nbd1' 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.064 /dev/nbd1' 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.064 256+0 records in 00:06:17.064 256+0 records out 00:06:17.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504288 s, 208 MB/s 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.064 13:17:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.064 256+0 records in 00:06:17.064 256+0 records out 00:06:17.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236557 s, 44.3 MB/s 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.065 256+0 records in 00:06:17.065 256+0 records out 00:06:17.065 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.029743 s, 35.3 MB/s 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.065 13:17:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.324 13:17:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.324 13:17:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.324 13:17:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.324 13:17:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.324 13:17:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.324 13:17:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.324 13:17:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.324 13:17:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.324 13:17:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.324 13:17:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.582 13:17:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.582 13:17:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.582 13:17:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.582 13:17:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.582 13:17:52 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.582 13:17:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.582 13:17:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.582 13:17:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.582 13:17:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.583 13:17:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.583 13:17:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.840 13:17:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.840 13:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.840 13:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.840 13:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.840 13:17:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.840 13:17:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.840 13:17:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.840 13:17:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.840 13:17:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.840 13:17:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.840 13:17:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.840 13:17:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.840 13:17:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.406 13:17:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:19.777 [2024-07-13 13:17:54.400572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.035 [2024-07-13 13:17:54.655403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.035 [2024-07-13 13:17:54.655406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.293 [2024-07-13 13:17:54.875344] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.293 [2024-07-13 13:17:54.875432] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.227 13:17:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:21.227 13:17:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:21.227 spdk_app_start Round 1 00:06:21.227 13:17:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 151390 /var/tmp/spdk-nbd.sock 00:06:21.227 13:17:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 151390 ']' 00:06:21.227 13:17:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.227 13:17:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.227 13:17:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
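Round 0 above exercises the full nbd data path: two 64 MB malloc bdevs are created over the RPC socket, exported as /dev/nbd0 and /dev/nbd1, written with 1 MiB of random data, read back and compared, then stopped. The same RPC calls condensed into standalone commands, assuming the app is already listening on /var/tmp/spdk-nbd.sock and the nbd module is loaded; the scratch file path here is illustrative.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/tmp/nbdrandtest                       # illustrative scratch file

    $rpc -s $sock bdev_malloc_create 64 4096   # creates Malloc0
    $rpc -s $sock bdev_malloc_create 64 4096   # creates Malloc1
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of=$tmp bs=4096 count=256          # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct  # write it to the export
        cmp -b -n 1M $tmp $nbd                             # read back and compare
    done
    rm $tmp

    $rpc -s $sock nbd_stop_disk /dev/nbd0
    $rpc -s $sock nbd_stop_disk /dev/nbd1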
00:06:21.227 13:17:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.227 13:17:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.485 13:17:56 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.485 13:17:56 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:21.485 13:17:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.055 Malloc0 00:06:22.055 13:17:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.351 Malloc1 00:06:22.351 13:17:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.351 13:17:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.351 13:17:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.351 13:17:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.351 13:17:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.351 13:17:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.351 13:17:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.351 13:17:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.351 13:17:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.351 13:17:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.352 13:17:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.352 13:17:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.352 13:17:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.352 13:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.352 13:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.352 13:17:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.613 /dev/nbd0 00:06:22.613 13:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.614 13:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:22.614 1+0 records in 00:06:22.614 1+0 records out 00:06:22.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186625 s, 21.9 MB/s 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:22.614 13:17:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:22.614 13:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.614 13:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.614 13:17:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.872 /dev/nbd1 00:06:22.872 13:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.872 13:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.872 1+0 records in 00:06:22.872 1+0 records out 00:06:22.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177248 s, 23.1 MB/s 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:22.872 13:17:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:22.872 13:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.872 13:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.872 13:17:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.872 13:17:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.872 13:17:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:23.130 { 00:06:23.130 "nbd_device": "/dev/nbd0", 00:06:23.130 "bdev_name": "Malloc0" 00:06:23.130 }, 00:06:23.130 { 00:06:23.130 "nbd_device": "/dev/nbd1", 00:06:23.130 "bdev_name": "Malloc1" 00:06:23.130 } 00:06:23.130 ]' 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.130 { 00:06:23.130 "nbd_device": "/dev/nbd0", 00:06:23.130 "bdev_name": "Malloc0" 00:06:23.130 }, 00:06:23.130 { 00:06:23.130 "nbd_device": "/dev/nbd1", 00:06:23.130 "bdev_name": "Malloc1" 00:06:23.130 } 00:06:23.130 ]' 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.130 /dev/nbd1' 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.130 /dev/nbd1' 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.130 256+0 records in 00:06:23.130 256+0 records out 00:06:23.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483825 s, 217 MB/s 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.130 256+0 records in 00:06:23.130 256+0 records out 00:06:23.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272138 s, 38.5 MB/s 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.130 256+0 records in 00:06:23.130 256+0 records out 00:06:23.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314688 s, 33.3 MB/s 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.130 13:17:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.387 13:17:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.387 13:17:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.387 13:17:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.387 13:17:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.387 13:17:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.387 13:17:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.387 13:17:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.387 13:17:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.387 13:17:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.387 13:17:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.645 13:17:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.645 13:17:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.645 13:17:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.645 13:17:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.645 13:17:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.645 13:17:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.645 13:17:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.645 13:17:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.645 13:17:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.645 13:17:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.645 13:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.903 13:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.903 13:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.903 13:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.903 13:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.903 13:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.903 13:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.903 13:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.903 13:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.903 13:17:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.903 13:17:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.903 13:17:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.903 13:17:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.903 13:17:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.468 13:17:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.837 [2024-07-13 13:18:00.453144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.094 [2024-07-13 13:18:00.707452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.095 [2024-07-13 13:18:00.707456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.352 [2024-07-13 13:18:00.917920] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.352 [2024-07-13 13:18:00.917995] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.282 13:18:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:27.282 13:18:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:27.282 spdk_app_start Round 2 00:06:27.282 13:18:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 151390 /var/tmp/spdk-nbd.sock 00:06:27.282 13:18:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 151390 ']' 00:06:27.282 13:18:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.282 13:18:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.282 13:18:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
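After the disks are stopped, the trace above asks nbd_get_disks for the remaining exports and counts them with jq and grep -c, expecting zero. The same check as a standalone snippet, assuming jq is installed and the socket path used in this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c prints 0 and exits 1 when nothing matches
    [ "$count" -eq 0 ] && echo "all nbd devices stopped"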
00:06:27.282 13:18:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.282 13:18:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.539 13:18:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.539 13:18:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:27.539 13:18:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.106 Malloc0 00:06:28.106 13:18:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.363 Malloc1 00:06:28.363 13:18:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.363 13:18:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.620 /dev/nbd0 00:06:28.620 13:18:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.620 13:18:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:28.620 1+0 records in 00:06:28.620 1+0 records out 00:06:28.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193534 s, 21.2 MB/s 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:28.620 13:18:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:28.620 13:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.620 13:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.620 13:18:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.877 /dev/nbd1 00:06:28.877 13:18:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.877 13:18:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.877 1+0 records in 00:06:28.877 1+0 records out 00:06:28.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296481 s, 13.8 MB/s 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:28.877 13:18:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:28.877 13:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.877 13:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.877 13:18:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.877 13:18:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.877 13:18:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:29.134 { 00:06:29.134 "nbd_device": "/dev/nbd0", 00:06:29.134 "bdev_name": "Malloc0" 00:06:29.134 }, 00:06:29.134 { 00:06:29.134 "nbd_device": "/dev/nbd1", 00:06:29.134 "bdev_name": "Malloc1" 00:06:29.134 } 00:06:29.134 ]' 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.134 { 00:06:29.134 "nbd_device": "/dev/nbd0", 00:06:29.134 "bdev_name": "Malloc0" 00:06:29.134 }, 00:06:29.134 { 00:06:29.134 "nbd_device": "/dev/nbd1", 00:06:29.134 "bdev_name": "Malloc1" 00:06:29.134 } 00:06:29.134 ]' 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.134 /dev/nbd1' 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.134 /dev/nbd1' 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.134 13:18:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.135 256+0 records in 00:06:29.135 256+0 records out 00:06:29.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00484427 s, 216 MB/s 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.135 256+0 records in 00:06:29.135 256+0 records out 00:06:29.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253916 s, 41.3 MB/s 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.135 256+0 records in 00:06:29.135 256+0 records out 00:06:29.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300909 s, 34.8 MB/s 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.135 13:18:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.700 13:18:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.957 13:18:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.957 13:18:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.957 13:18:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.957 13:18:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.957 13:18:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.957 13:18:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.957 13:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.957 13:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.214 13:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.214 13:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.214 13:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.214 13:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:30.214 13:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.214 13:18:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.214 13:18:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.214 13:18:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.214 13:18:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.214 13:18:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.472 13:18:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:31.846 [2024-07-13 13:18:06.561208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.105 [2024-07-13 13:18:06.817067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.105 [2024-07-13 13:18:06.817069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.362 [2024-07-13 13:18:07.043171] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.362 [2024-07-13 13:18:07.043263] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.737 13:18:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 151390 /var/tmp/spdk-nbd.sock 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 151390 ']' 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
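The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from a helper that polls the RPC socket until the freshly started app answers. A simplified stand-in for that poll (not the actual autotest_common.sh implementation), assuming rpc.py and the socket path used in this run:

    wait_for_rpc() {
        local sock=$1 i
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        for i in $(seq 1 100); do
            # rpc_get_methods answers once the app is up and listening on the socket
            $rpc -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc /var/tmp/spdk-nbd.sock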
00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:33.737 13:18:08 event.app_repeat -- event/event.sh@39 -- # killprocess 151390 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 151390 ']' 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 151390 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 151390 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 151390' 00:06:33.737 killing process with pid 151390 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@967 -- # kill 151390 00:06:33.737 13:18:08 event.app_repeat -- common/autotest_common.sh@972 -- # wait 151390 00:06:35.111 spdk_app_start is called in Round 0. 00:06:35.111 Shutdown signal received, stop current app iteration 00:06:35.111 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:35.111 spdk_app_start is called in Round 1. 00:06:35.111 Shutdown signal received, stop current app iteration 00:06:35.111 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:35.111 spdk_app_start is called in Round 2. 00:06:35.111 Shutdown signal received, stop current app iteration 00:06:35.111 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:35.111 spdk_app_start is called in Round 3. 
00:06:35.111 Shutdown signal received, stop current app iteration 00:06:35.111 13:18:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:35.111 13:18:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:35.111 00:06:35.111 real 0m20.603s 00:06:35.111 user 0m42.160s 00:06:35.111 sys 0m3.429s 00:06:35.111 13:18:09 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.111 13:18:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.111 ************************************ 00:06:35.111 END TEST app_repeat 00:06:35.111 ************************************ 00:06:35.111 13:18:09 event -- common/autotest_common.sh@1142 -- # return 0 00:06:35.111 13:18:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:35.111 13:18:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:35.111 13:18:09 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.111 13:18:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.111 13:18:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.111 ************************************ 00:06:35.111 START TEST cpu_locks 00:06:35.111 ************************************ 00:06:35.111 13:18:09 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:35.111 * Looking for test storage... 00:06:35.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:35.111 13:18:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:35.111 13:18:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:35.111 13:18:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:35.111 13:18:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:35.111 13:18:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.111 13:18:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.111 13:18:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.111 ************************************ 00:06:35.111 START TEST default_locks 00:06:35.111 ************************************ 00:06:35.111 13:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:35.111 13:18:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=154627 00:06:35.111 13:18:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.111 13:18:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 154627 00:06:35.111 13:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 154627 ']' 00:06:35.111 13:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.111 13:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.111 13:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
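default_locks starts a plain spdk_tgt on core 0 and then confirms the per-core lock is actually held by running lslocks against the target's PID; the "lslocks: write error" printed below is harmless and most likely appears because grep -q closes the pipe as soon as it finds a match. A standalone version of the check, assuming exactly one spdk_tgt instance is running:

    tgt_pid=$(pgrep -f spdk_tgt | head -n1)    # assumes a single spdk_tgt process
    if lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock; then
        echo "core lock is held by pid $tgt_pid"
    fi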
00:06:35.111 13:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.111 13:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.369 [2024-07-13 13:18:09.928297] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:35.369 [2024-07-13 13:18:09.928480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154627 ] 00:06:35.369 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.369 [2024-07-13 13:18:10.077573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.627 [2024-07-13 13:18:10.337724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.574 13:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.574 13:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:36.574 13:18:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 154627 00:06:36.574 13:18:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 154627 00:06:36.574 13:18:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.832 lslocks: write error 00:06:36.832 13:18:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 154627 00:06:36.832 13:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 154627 ']' 00:06:36.832 13:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 154627 00:06:36.832 13:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:36.832 13:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.832 13:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 154627 00:06:36.832 13:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.832 13:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.832 13:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 154627' 00:06:36.832 killing process with pid 154627 00:06:36.832 13:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 154627 00:06:36.832 13:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 154627 00:06:39.359 13:18:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 154627 00:06:39.359 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:39.359 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 154627 00:06:39.359 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:39.359 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 154627 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 154627 ']' 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (154627) - No such process 00:06:39.360 ERROR: process (pid: 154627) is no longer running 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.360 00:06:39.360 real 0m4.259s 00:06:39.360 user 0m4.241s 00:06:39.360 sys 0m0.758s 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.360 13:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.360 ************************************ 00:06:39.360 END TEST default_locks 00:06:39.360 ************************************ 00:06:39.617 13:18:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:39.617 13:18:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:39.617 13:18:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.618 13:18:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.618 13:18:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.618 ************************************ 00:06:39.618 START TEST default_locks_via_rpc 00:06:39.618 ************************************ 00:06:39.618 13:18:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:39.618 13:18:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=155186 00:06:39.618 13:18:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.618 13:18:14 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 155186 00:06:39.618 13:18:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 155186 ']' 00:06:39.618 13:18:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.618 13:18:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.618 13:18:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.618 13:18:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.618 13:18:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.618 [2024-07-13 13:18:14.229410] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:39.618 [2024-07-13 13:18:14.229576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155186 ] 00:06:39.618 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.618 [2024-07-13 13:18:14.362688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.183 [2024-07-13 13:18:14.622975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 155186 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 155186 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.116 13:18:15 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 155186 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 155186 ']' 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 155186 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155186 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155186' 00:06:41.116 killing process with pid 155186 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 155186 00:06:41.116 13:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 155186 00:06:43.649 00:06:43.649 real 0m4.250s 00:06:43.649 user 0m4.202s 00:06:43.649 sys 0m0.746s 00:06:43.649 13:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.649 13:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.649 ************************************ 00:06:43.649 END TEST default_locks_via_rpc 00:06:43.649 ************************************ 00:06:43.907 13:18:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:43.907 13:18:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:43.907 13:18:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.907 13:18:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.907 13:18:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.907 ************************************ 00:06:43.907 START TEST non_locking_app_on_locked_coremask 00:06:43.907 ************************************ 00:06:43.907 13:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:43.907 13:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=155752 00:06:43.907 13:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.907 13:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 155752 /var/tmp/spdk.sock 00:06:43.907 13:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 155752 ']' 00:06:43.907 13:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.907 13:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.907 13:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.907 13:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.907 13:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.907 [2024-07-13 13:18:18.530135] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:43.908 [2024-07-13 13:18:18.530308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155752 ] 00:06:43.908 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.166 [2024-07-13 13:18:18.653940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.166 [2024-07-13 13:18:18.906437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.101 13:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.101 13:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:45.101 13:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=155888 00:06:45.101 13:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:45.101 13:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 155888 /var/tmp/spdk2.sock 00:06:45.101 13:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 155888 ']' 00:06:45.101 13:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.101 13:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.101 13:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.101 13:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.101 13:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.360 [2024-07-13 13:18:19.879268] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:45.360 [2024-07-13 13:18:19.879408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155888 ] 00:06:45.360 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.360 [2024-07-13 13:18:20.067389] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
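The "CPU core locks deactivated." notice just above comes from the second target: it was started with --disable-cpumask-locks, so it never tries to claim core 0 and can run next to the first instance that already holds the lock. Reduced to its essentials (binary and socket paths as used in this job):

  ./build/bin/spdk_tgt -m 0x1 &                                                 # claims /var/tmp/spdk_cpu_lock_000
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # skips the claim, logs "CPU core locks deactivated."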
00:06:45.360 [2024-07-13 13:18:20.067469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.927 [2024-07-13 13:18:20.594349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.828 13:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.828 13:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:47.828 13:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 155752 00:06:47.828 13:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 155752 00:06:47.828 13:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.394 lslocks: write error 00:06:48.394 13:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 155752 00:06:48.394 13:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 155752 ']' 00:06:48.394 13:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 155752 00:06:48.394 13:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:48.394 13:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.394 13:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155752 00:06:48.394 13:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.394 13:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.394 13:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155752' 00:06:48.394 killing process with pid 155752 00:06:48.394 13:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 155752 00:06:48.394 13:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 155752 00:06:53.653 13:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 155888 00:06:53.653 13:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 155888 ']' 00:06:53.653 13:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 155888 00:06:53.653 13:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:53.653 13:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.653 13:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155888 00:06:53.653 13:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.653 13:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.653 13:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155888' 00:06:53.653 killing 
process with pid 155888 00:06:53.653 13:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 155888 00:06:53.653 13:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 155888 00:06:56.181 00:06:56.181 real 0m12.256s 00:06:56.181 user 0m12.541s 00:06:56.181 sys 0m1.462s 00:06:56.181 13:18:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.181 13:18:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.181 ************************************ 00:06:56.181 END TEST non_locking_app_on_locked_coremask 00:06:56.181 ************************************ 00:06:56.181 13:18:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:56.181 13:18:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:56.181 13:18:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.181 13:18:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.181 13:18:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.181 ************************************ 00:06:56.181 START TEST locking_app_on_unlocked_coremask 00:06:56.181 ************************************ 00:06:56.181 13:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:56.181 13:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=157248 00:06:56.181 13:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:56.181 13:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 157248 /var/tmp/spdk.sock 00:06:56.181 13:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 157248 ']' 00:06:56.181 13:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.181 13:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.181 13:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.181 13:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.181 13:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.181 [2024-07-13 13:18:30.839355] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:56.181 [2024-07-13 13:18:30.839526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157248 ] 00:06:56.181 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.466 [2024-07-13 13:18:30.975448] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:56.466 [2024-07-13 13:18:30.975527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.724 [2024-07-13 13:18:31.240957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.657 13:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.657 13:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:57.657 13:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=157404 00:06:57.657 13:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:57.657 13:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 157404 /var/tmp/spdk2.sock 00:06:57.657 13:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 157404 ']' 00:06:57.657 13:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.657 13:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.657 13:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.657 13:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.657 13:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.657 [2024-07-13 13:18:32.235075] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:57.657 [2024-07-13 13:18:32.235231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157404 ] 00:06:57.657 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.915 [2024-07-13 13:18:32.424997] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.480 [2024-07-13 13:18:32.950169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.376 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.376 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:00.376 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 157404 00:07:00.376 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 157404 00:07:00.376 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.940 lslocks: write error 00:07:00.940 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 157248 00:07:00.940 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 157248 ']' 00:07:00.940 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 157248 00:07:00.940 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:00.940 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.940 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 157248 00:07:00.940 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.940 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.940 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 157248' 00:07:00.940 killing process with pid 157248 00:07:00.940 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 157248 00:07:00.940 13:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 157248 00:07:06.276 13:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 157404 00:07:06.276 13:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 157404 ']' 00:07:06.276 13:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 157404 00:07:06.276 13:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:06.276 13:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.276 13:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 157404 00:07:06.276 13:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:06.276 13:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.276 13:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 157404' 00:07:06.276 killing process with pid 157404 00:07:06.276 13:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 157404 00:07:06.276 13:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 157404 00:07:08.801 00:07:08.801 real 0m12.350s 00:07:08.801 user 0m12.694s 00:07:08.801 sys 0m1.485s 00:07:08.801 13:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.801 13:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.801 ************************************ 00:07:08.801 END TEST locking_app_on_unlocked_coremask 00:07:08.801 ************************************ 00:07:08.801 13:18:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:08.801 13:18:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:08.801 13:18:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.801 13:18:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.801 13:18:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.801 ************************************ 00:07:08.801 START TEST locking_app_on_locked_coremask 00:07:08.801 ************************************ 00:07:08.801 13:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:08.801 13:18:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=158763 00:07:08.801 13:18:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.801 13:18:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 158763 /var/tmp/spdk.sock 00:07:08.801 13:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 158763 ']' 00:07:08.801 13:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.802 13:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.802 13:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.802 13:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.802 13:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.802 [2024-07-13 13:18:43.237283] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:08.802 [2024-07-13 13:18:43.237451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158763 ] 00:07:08.802 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.802 [2024-07-13 13:18:43.369620] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.059 [2024-07-13 13:18:43.629399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=158905 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 158905 /var/tmp/spdk2.sock 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 158905 /var/tmp/spdk2.sock 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 158905 /var/tmp/spdk2.sock 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 158905 ']' 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.992 13:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.992 [2024-07-13 13:18:44.624006] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
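The second target launched just above is expected to die: it reuses -m 0x1 without --disable-cpumask-locks while the first instance still holds the core 0 lock, so startup aborts with the "Cannot create lock on core 0" error shown next. The same scenario in isolation:

  ./build/bin/spdk_tgt -m 0x1 &                        # first instance claims core 0
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # exits: Cannot create lock on core 0, probably process <pid> has claimed it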
00:07:09.992 [2024-07-13 13:18:44.624167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158905 ] 00:07:09.992 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.250 [2024-07-13 13:18:44.814334] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 158763 has claimed it. 00:07:10.250 [2024-07-13 13:18:44.814423] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:10.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (158905) - No such process 00:07:10.814 ERROR: process (pid: 158905) is no longer running 00:07:10.814 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.814 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:10.814 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:10.814 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.814 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.814 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.814 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 158763 00:07:10.814 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 158763 00:07:10.814 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.073 lslocks: write error 00:07:11.073 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 158763 00:07:11.073 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 158763 ']' 00:07:11.073 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 158763 00:07:11.073 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:11.073 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.073 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 158763 00:07:11.073 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.073 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.073 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 158763' 00:07:11.073 killing process with pid 158763 00:07:11.073 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 158763 00:07:11.073 13:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 158763 00:07:13.598 00:07:13.598 real 0m5.055s 00:07:13.598 user 0m5.242s 00:07:13.598 sys 0m0.998s 00:07:13.598 13:18:48 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.598 13:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.598 ************************************ 00:07:13.598 END TEST locking_app_on_locked_coremask 00:07:13.598 ************************************ 00:07:13.598 13:18:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:13.598 13:18:48 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:13.598 13:18:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.598 13:18:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.598 13:18:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.598 ************************************ 00:07:13.598 START TEST locking_overlapped_coremask 00:07:13.598 ************************************ 00:07:13.598 13:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:13.598 13:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=159339 00:07:13.598 13:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:13.598 13:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 159339 /var/tmp/spdk.sock 00:07:13.598 13:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 159339 ']' 00:07:13.599 13:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.599 13:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.599 13:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.599 13:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.599 13:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.856 [2024-07-13 13:18:48.348550] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:13.857 [2024-07-13 13:18:48.348718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159339 ] 00:07:13.857 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.857 [2024-07-13 13:18:48.483336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.115 [2024-07-13 13:18:48.749624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.115 [2024-07-13 13:18:48.749670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.115 [2024-07-13 13:18:48.749680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.048 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=159477 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 159477 /var/tmp/spdk2.sock 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 159477 /var/tmp/spdk2.sock 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 159477 /var/tmp/spdk2.sock 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 159477 ']' 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.049 13:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.049 [2024-07-13 13:18:49.646694] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
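The masks here overlap on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is contested. Since the first target already holds /var/tmp/spdk_cpu_lock_002, the second instance aborts during startup, as the "Cannot create lock on core 2" error below confirms. In short:

  ./build/bin/spdk_tgt -m 0x7 &                         # claims spdk_cpu_lock_000, _001 and _002
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock   # needs cores 2-4; aborts on the core 2 lock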
00:07:15.049 [2024-07-13 13:18:49.646882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159477 ] 00:07:15.049 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.334 [2024-07-13 13:18:49.833761] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 159339 has claimed it. 00:07:15.334 [2024-07-13 13:18:49.833862] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:15.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (159477) - No such process 00:07:15.591 ERROR: process (pid: 159477) is no longer running 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 159339 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 159339 ']' 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 159339 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.591 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 159339 00:07:15.849 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.849 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.849 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 159339' 00:07:15.849 killing process with pid 159339 00:07:15.849 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 159339 00:07:15.849 13:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 159339 00:07:18.375 00:07:18.375 real 0m4.390s 00:07:18.375 user 0m11.200s 00:07:18.375 sys 0m0.828s 00:07:18.375 13:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.375 13:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.375 ************************************ 00:07:18.375 END TEST locking_overlapped_coremask 00:07:18.375 ************************************ 00:07:18.375 13:18:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:18.376 13:18:52 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:18.376 13:18:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.376 13:18:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.376 13:18:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.376 ************************************ 00:07:18.376 START TEST locking_overlapped_coremask_via_rpc 00:07:18.376 ************************************ 00:07:18.376 13:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:18.376 13:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=159908 00:07:18.376 13:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:18.376 13:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 159908 /var/tmp/spdk.sock 00:07:18.376 13:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 159908 ']' 00:07:18.376 13:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.376 13:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.376 13:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.376 13:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.376 13:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.376 [2024-07-13 13:18:52.789778] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:18.376 [2024-07-13 13:18:52.789951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159908 ] 00:07:18.376 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.376 [2024-07-13 13:18:52.924163] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:18.376 [2024-07-13 13:18:52.924213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.633 [2024-07-13 13:18:53.188343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.633 [2024-07-13 13:18:53.188394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.633 [2024-07-13 13:18:53.188399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.565 13:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.565 13:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:19.565 13:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=160054 00:07:19.565 13:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:19.565 13:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 160054 /var/tmp/spdk2.sock 00:07:19.565 13:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 160054 ']' 00:07:19.565 13:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.565 13:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.565 13:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.565 13:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.565 13:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.565 [2024-07-13 13:18:54.130486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:19.565 [2024-07-13 13:18:54.130635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160054 ] 00:07:19.565 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.823 [2024-07-13 13:18:54.314265] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:19.823 [2024-07-13 13:18:54.314334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.080 [2024-07-13 13:18:54.783043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.080 [2024-07-13 13:18:54.783089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.080 [2024-07-13 13:18:54.783093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.605 [2024-07-13 13:18:56.850054] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 159908 has claimed it. 
00:07:22.605 request: 00:07:22.605 { 00:07:22.605 "method": "framework_enable_cpumask_locks", 00:07:22.605 "req_id": 1 00:07:22.605 } 00:07:22.605 Got JSON-RPC error response 00:07:22.605 response: 00:07:22.605 { 00:07:22.605 "code": -32603, 00:07:22.605 "message": "Failed to claim CPU core: 2" 00:07:22.605 } 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:22.605 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.606 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.606 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.606 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 159908 /var/tmp/spdk.sock 00:07:22.606 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 159908 ']' 00:07:22.606 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.606 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.606 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.606 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.606 13:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.606 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.606 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:22.606 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 160054 /var/tmp/spdk2.sock 00:07:22.606 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 160054 ']' 00:07:22.606 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.606 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.606 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:22.606 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.606 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.864 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.864 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:22.864 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:22.864 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:22.864 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:22.864 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:22.864 00:07:22.864 real 0m4.664s 00:07:22.864 user 0m1.479s 00:07:22.864 sys 0m0.259s 00:07:22.864 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.864 13:18:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.864 ************************************ 00:07:22.864 END TEST locking_overlapped_coremask_via_rpc 00:07:22.864 ************************************ 00:07:22.864 13:18:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:22.864 13:18:57 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:22.864 13:18:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 159908 ]] 00:07:22.864 13:18:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 159908 00:07:22.864 13:18:57 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 159908 ']' 00:07:22.864 13:18:57 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 159908 00:07:22.864 13:18:57 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:22.864 13:18:57 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.864 13:18:57 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 159908 00:07:22.864 13:18:57 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:22.864 13:18:57 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:22.864 13:18:57 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 159908' 00:07:22.864 killing process with pid 159908 00:07:22.864 13:18:57 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 159908 00:07:22.864 13:18:57 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 159908 00:07:25.391 13:18:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 160054 ]] 00:07:25.391 13:18:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 160054 00:07:25.391 13:18:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 160054 ']' 00:07:25.391 13:18:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 160054 00:07:25.391 13:18:59 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:07:25.391 13:18:59 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.391 13:18:59 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 160054 00:07:25.391 13:18:59 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:25.391 13:18:59 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:25.391 13:18:59 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 160054' 00:07:25.391 killing process with pid 160054 00:07:25.391 13:18:59 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 160054 00:07:25.391 13:18:59 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 160054 00:07:27.289 13:19:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:27.289 13:19:01 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:27.289 13:19:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 159908 ]] 00:07:27.289 13:19:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 159908 00:07:27.289 13:19:01 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 159908 ']' 00:07:27.289 13:19:01 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 159908 00:07:27.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (159908) - No such process 00:07:27.289 13:19:01 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 159908 is not found' 00:07:27.289 Process with pid 159908 is not found 00:07:27.289 13:19:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 160054 ]] 00:07:27.289 13:19:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 160054 00:07:27.289 13:19:01 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 160054 ']' 00:07:27.289 13:19:01 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 160054 00:07:27.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (160054) - No such process 00:07:27.289 13:19:01 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 160054 is not found' 00:07:27.289 Process with pid 160054 is not found 00:07:27.289 13:19:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:27.289 00:07:27.289 real 0m52.214s 00:07:27.289 user 1m26.265s 00:07:27.289 sys 0m7.813s 00:07:27.289 13:19:01 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.289 13:19:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.289 ************************************ 00:07:27.289 END TEST cpu_locks 00:07:27.289 ************************************ 00:07:27.289 13:19:01 event -- common/autotest_common.sh@1142 -- # return 0 00:07:27.289 00:07:27.289 real 1m21.910s 00:07:27.289 user 2m21.490s 00:07:27.289 sys 0m12.460s 00:07:27.289 13:19:01 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.289 13:19:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.289 ************************************ 00:07:27.289 END TEST event 00:07:27.289 ************************************ 00:07:27.289 13:19:02 -- common/autotest_common.sh@1142 -- # return 0 00:07:27.289 13:19:02 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:27.289 13:19:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.289 13:19:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.289 13:19:02 -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.289 ************************************ 00:07:27.289 START TEST thread 00:07:27.289 ************************************ 00:07:27.289 13:19:02 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:27.547 * Looking for test storage... 00:07:27.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:27.547 13:19:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:27.547 13:19:02 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:27.547 13:19:02 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.547 13:19:02 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.547 ************************************ 00:07:27.547 START TEST thread_poller_perf 00:07:27.547 ************************************ 00:07:27.547 13:19:02 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:27.547 [2024-07-13 13:19:02.131595] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:27.547 [2024-07-13 13:19:02.131734] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161084 ] 00:07:27.547 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.547 [2024-07-13 13:19:02.266100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.805 [2024-07-13 13:19:02.520422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.805 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:29.701 ====================================== 00:07:29.701 busy:2719265459 (cyc) 00:07:29.701 total_run_count: 289000 00:07:29.701 tsc_hz: 2700000000 (cyc) 00:07:29.701 ====================================== 00:07:29.701 poller_cost: 9409 (cyc), 3484 (nsec) 00:07:29.701 00:07:29.701 real 0m1.880s 00:07:29.701 user 0m1.708s 00:07:29.701 sys 0m0.163s 00:07:29.701 13:19:03 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.701 13:19:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:29.701 ************************************ 00:07:29.701 END TEST thread_poller_perf 00:07:29.701 ************************************ 00:07:29.701 13:19:03 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:29.701 13:19:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:29.701 13:19:03 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:29.701 13:19:03 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.701 13:19:03 thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.701 ************************************ 00:07:29.701 START TEST thread_poller_perf 00:07:29.701 ************************************ 00:07:29.701 13:19:04 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:29.701 [2024-07-13 13:19:04.057874] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:29.701 [2024-07-13 13:19:04.058017] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161365 ] 00:07:29.701 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.701 [2024-07-13 13:19:04.202482] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.959 [2024-07-13 13:19:04.459529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.959 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:31.329 ====================================== 00:07:31.329 busy:2705298359 (cyc) 00:07:31.329 total_run_count: 3665000 00:07:31.329 tsc_hz: 2700000000 (cyc) 00:07:31.329 ====================================== 00:07:31.329 poller_cost: 738 (cyc), 273 (nsec) 00:07:31.329 00:07:31.329 real 0m1.882s 00:07:31.329 user 0m1.697s 00:07:31.329 sys 0m0.176s 00:07:31.329 13:19:05 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.329 13:19:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:31.329 ************************************ 00:07:31.329 END TEST thread_poller_perf 00:07:31.329 ************************************ 00:07:31.329 13:19:05 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:31.329 13:19:05 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:31.329 00:07:31.329 real 0m3.899s 00:07:31.329 user 0m3.457s 00:07:31.329 sys 0m0.434s 00:07:31.329 13:19:05 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.329 13:19:05 thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.329 ************************************ 00:07:31.329 END TEST thread 00:07:31.329 ************************************ 00:07:31.329 13:19:05 -- common/autotest_common.sh@1142 -- # return 0 00:07:31.330 13:19:05 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:31.330 13:19:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.330 13:19:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.330 13:19:05 -- common/autotest_common.sh@10 -- # set +x 00:07:31.330 ************************************ 00:07:31.330 START TEST accel 00:07:31.330 ************************************ 00:07:31.330 13:19:05 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:31.330 * Looking for test storage... 00:07:31.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:31.330 13:19:06 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:31.330 13:19:06 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:31.330 13:19:06 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:31.330 13:19:06 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=161685 00:07:31.330 13:19:06 accel -- accel/accel.sh@63 -- # waitforlisten 161685 00:07:31.330 13:19:06 accel -- common/autotest_common.sh@829 -- # '[' -z 161685 ']' 00:07:31.330 13:19:06 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.330 13:19:06 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:31.330 13:19:06 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:31.330 13:19:06 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.330 13:19:06 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.330 13:19:06 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
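The poller_cost figures printed by the two thread_poller_perf runs above follow from the counters they report: busy cycles divided by total_run_count gives the cost per poll in cycles, and the reported tsc_hz (2.7 GHz) converts that to nanoseconds. A minimal shell sketch of that arithmetic, not captured output, with values copied from the first run (the exact rounding inside poller_perf may differ):

    # sketch only, not part of the log: values taken from the first run above
    busy_cyc=2719265459; runs=289000; tsc_khz=2700000
    echo $(( busy_cyc / runs ))                        # -> 9409 cyc per poll
    echo $(( (busy_cyc / runs) * 1000000 / tsc_khz ))  # -> 3484 nsec per poll
    # the second run works out the same way: 2705298359 / 3665000 -> 738 cyc, 273 nsec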
00:07:31.330 13:19:06 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.330 13:19:06 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.330 13:19:06 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.330 13:19:06 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.330 13:19:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.330 13:19:06 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.330 13:19:06 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:31.330 13:19:06 accel -- accel/accel.sh@41 -- # jq -r . 00:07:31.587 [2024-07-13 13:19:06.116726] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:31.588 [2024-07-13 13:19:06.116893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161685 ] 00:07:31.588 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.588 [2024-07-13 13:19:06.241337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.845 [2024-07-13 13:19:06.496633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.783 13:19:07 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.783 13:19:07 accel -- common/autotest_common.sh@862 -- # return 0 00:07:32.783 13:19:07 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:32.783 13:19:07 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:32.783 13:19:07 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:32.783 13:19:07 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:32.783 13:19:07 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:32.783 13:19:07 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:32.783 13:19:07 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.783 13:19:07 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:32.783 13:19:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.783 13:19:07 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.783 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.783 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.783 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.783 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.783 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.783 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.783 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.783 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.783 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.783 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.783 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.783 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.783 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.783 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.783 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.783 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.783 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.783 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.783 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.783 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.783 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.783 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.783 
13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.783 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.784 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.784 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.784 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.784 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.784 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.784 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.784 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.784 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.784 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.784 13:19:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.784 13:19:07 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.784 13:19:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.784 13:19:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.784 13:19:07 accel -- accel/accel.sh@75 -- # killprocess 161685 00:07:32.784 13:19:07 accel -- common/autotest_common.sh@948 -- # '[' -z 161685 ']' 00:07:32.784 13:19:07 accel -- common/autotest_common.sh@952 -- # kill -0 161685 00:07:32.784 13:19:07 accel -- common/autotest_common.sh@953 -- # uname 00:07:32.784 13:19:07 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.784 13:19:07 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 161685 00:07:32.784 13:19:07 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.784 13:19:07 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.784 13:19:07 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 161685' 00:07:32.784 killing process with pid 161685 00:07:32.784 13:19:07 accel -- common/autotest_common.sh@967 -- # kill 161685 00:07:32.784 13:19:07 accel -- common/autotest_common.sh@972 -- # wait 161685 00:07:35.361 13:19:09 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:35.361 13:19:09 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:35.361 13:19:09 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.361 13:19:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.361 13:19:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.361 13:19:09 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:35.361 13:19:09 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:35.361 13:19:09 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:35.361 13:19:09 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.361 13:19:09 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.361 13:19:09 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.361 13:19:09 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.361 13:19:09 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.361 13:19:09 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:35.361 13:19:09 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:35.361 13:19:10 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.361 13:19:10 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:35.361 13:19:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.361 13:19:10 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:35.361 13:19:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:35.361 13:19:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.361 13:19:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.361 ************************************ 00:07:35.361 START TEST accel_missing_filename 00:07:35.361 ************************************ 00:07:35.361 13:19:10 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:35.361 13:19:10 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:35.361 13:19:10 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:35.361 13:19:10 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:35.361 13:19:10 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.362 13:19:10 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:35.362 13:19:10 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.362 13:19:10 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:35.362 13:19:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:35.362 13:19:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:35.362 13:19:10 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.362 13:19:10 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.362 13:19:10 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.362 13:19:10 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.362 13:19:10 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.362 13:19:10 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:35.362 13:19:10 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:35.619 [2024-07-13 13:19:10.119654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:35.619 [2024-07-13 13:19:10.119832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162170 ] 00:07:35.619 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.619 [2024-07-13 13:19:10.253911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.877 [2024-07-13 13:19:10.508587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.135 [2024-07-13 13:19:10.742760] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.700 [2024-07-13 13:19:11.305437] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:37.265 A filename is required. 
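The "A filename is required." failure above is exactly what the NOT wrapper expects: the compress workload needs an uncompressed input file passed with -l. A sketch of the corresponding valid invocation, not captured output; the input file shown is the one the accel_compress_verify test below actually passes:

    # sketch only, not part of the log: supplying -l avoids the missing-filename
    # error deliberately provoked above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y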
00:07:37.265 13:19:11 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:37.265 13:19:11 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:37.265 13:19:11 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:37.265 13:19:11 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:37.265 13:19:11 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:37.265 13:19:11 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:37.265 00:07:37.265 real 0m1.693s 00:07:37.265 user 0m1.486s 00:07:37.265 sys 0m0.234s 00:07:37.265 13:19:11 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.265 13:19:11 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:37.265 ************************************ 00:07:37.265 END TEST accel_missing_filename 00:07:37.265 ************************************ 00:07:37.265 13:19:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.265 13:19:11 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.265 13:19:11 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:37.265 13:19:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.265 13:19:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.265 ************************************ 00:07:37.265 START TEST accel_compress_verify 00:07:37.265 ************************************ 00:07:37.265 13:19:11 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.265 13:19:11 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:37.265 13:19:11 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.265 13:19:11 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:37.265 13:19:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.265 13:19:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:37.265 13:19:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.265 13:19:11 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.265 13:19:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.265 13:19:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:37.265 13:19:11 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.265 13:19:11 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.265 13:19:11 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.265 13:19:11 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.265 13:19:11 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.265 13:19:11 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:37.265 13:19:11 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:37.265 [2024-07-13 13:19:11.861568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:37.265 [2024-07-13 13:19:11.861717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162410 ] 00:07:37.265 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.265 [2024-07-13 13:19:12.005921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.523 [2024-07-13 13:19:12.267929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.781 [2024-07-13 13:19:12.503616] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.345 [2024-07-13 13:19:13.064835] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:38.910 00:07:38.910 Compression does not support the verify option, aborting. 00:07:38.910 13:19:13 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:38.910 13:19:13 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:38.910 13:19:13 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:38.910 13:19:13 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:38.910 13:19:13 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:38.910 13:19:13 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:38.910 00:07:38.910 real 0m1.705s 00:07:38.910 user 0m1.469s 00:07:38.910 sys 0m0.265s 00:07:38.910 13:19:13 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.910 13:19:13 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:38.910 ************************************ 00:07:38.910 END TEST accel_compress_verify 00:07:38.910 ************************************ 00:07:38.910 13:19:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.910 13:19:13 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:38.910 13:19:13 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:38.910 13:19:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.910 13:19:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.910 ************************************ 00:07:38.910 START TEST accel_wrong_workload 00:07:38.910 ************************************ 00:07:38.910 13:19:13 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:38.910 13:19:13 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:38.910 13:19:13 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:38.910 13:19:13 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:38.910 13:19:13 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.910 13:19:13 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:38.910 13:19:13 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.910 13:19:13 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:38.910 13:19:13 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:38.910 13:19:13 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:38.910 13:19:13 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.910 13:19:13 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.910 13:19:13 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.910 13:19:13 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.910 13:19:13 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.910 13:19:13 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:38.910 13:19:13 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:38.910 Unsupported workload type: foobar 00:07:38.910 [2024-07-13 13:19:13.608117] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:38.910 accel_perf options: 00:07:38.910 [-h help message] 00:07:38.910 [-q queue depth per core] 00:07:38.910 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:38.910 [-T number of threads per core 00:07:38.910 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:38.910 [-t time in seconds] 00:07:38.910 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:38.910 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:38.910 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:38.910 [-l for compress/decompress workloads, name of uncompressed input file 00:07:38.910 [-S for crc32c workload, use this seed value (default 0) 00:07:38.910 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:38.910 [-f for fill workload, use this BYTE value (default 255) 00:07:38.910 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:38.910 [-y verify result if this switch is on] 00:07:38.910 [-a tasks to allocate per core (default: same value as -q)] 00:07:38.910 Can be used to spread operations across a wider range of memory. 
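The options listing above is what accel_perf prints when handed the unsupported "foobar" workload. For contrast, a supported invocation using those same options looks like the sketch below; this is not captured output, the flags are the ones the accel_crc32c test further down passes, with -o added per the help text:

    # sketch only, not part of the log: a valid crc32c run using the flags
    # documented in the options listing above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -y -o 4096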
00:07:38.910 13:19:13 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:38.910 13:19:13 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:38.910 13:19:13 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:38.910 13:19:13 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:38.910 00:07:38.910 real 0m0.057s 00:07:38.910 user 0m0.058s 00:07:38.910 sys 0m0.036s 00:07:38.910 13:19:13 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.910 13:19:13 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:38.910 ************************************ 00:07:38.910 END TEST accel_wrong_workload 00:07:38.910 ************************************ 00:07:38.910 13:19:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.910 13:19:13 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:38.910 13:19:13 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:38.910 13:19:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.910 13:19:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.168 ************************************ 00:07:39.168 START TEST accel_negative_buffers 00:07:39.168 ************************************ 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:39.168 13:19:13 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:39.168 13:19:13 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:39.168 13:19:13 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.168 13:19:13 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.168 13:19:13 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.168 13:19:13 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.168 13:19:13 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.168 13:19:13 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:39.168 13:19:13 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:39.168 -x option must be non-negative. 
00:07:39.168 [2024-07-13 13:19:13.707044] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:39.168 accel_perf options: 00:07:39.168 [-h help message] 00:07:39.168 [-q queue depth per core] 00:07:39.168 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:39.168 [-T number of threads per core 00:07:39.168 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:39.168 [-t time in seconds] 00:07:39.168 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:39.168 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:39.168 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:39.168 [-l for compress/decompress workloads, name of uncompressed input file 00:07:39.168 [-S for crc32c workload, use this seed value (default 0) 00:07:39.168 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:39.168 [-f for fill workload, use this BYTE value (default 255) 00:07:39.168 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:39.168 [-y verify result if this switch is on] 00:07:39.168 [-a tasks to allocate per core (default: same value as -q)] 00:07:39.168 Can be used to spread operations across a wider range of memory. 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:39.168 00:07:39.168 real 0m0.056s 00:07:39.168 user 0m0.065s 00:07:39.168 sys 0m0.028s 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.168 13:19:13 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:39.168 ************************************ 00:07:39.168 END TEST accel_negative_buffers 00:07:39.168 ************************************ 00:07:39.168 13:19:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.168 13:19:13 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:39.168 13:19:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:39.168 13:19:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.168 13:19:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.168 ************************************ 00:07:39.168 START TEST accel_crc32c 00:07:39.168 ************************************ 00:07:39.168 13:19:13 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:39.168 13:19:13 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:39.168 [2024-07-13 13:19:13.803480] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:39.168 [2024-07-13 13:19:13.803597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162729 ] 00:07:39.168 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.425 [2024-07-13 13:19:13.932373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.683 [2024-07-13 13:19:14.177389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.683 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.684 13:19:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:42.209 13:19:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.209 00:07:42.209 real 0m2.669s 00:07:42.209 user 0m0.012s 00:07:42.209 sys 0m0.001s 00:07:42.209 13:19:16 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.209 13:19:16 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:42.209 ************************************ 00:07:42.210 END TEST accel_crc32c 00:07:42.210 ************************************ 00:07:42.210 13:19:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.210 13:19:16 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:42.210 13:19:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:42.210 13:19:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.210 13:19:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.210 ************************************ 00:07:42.210 START TEST accel_crc32c_C2 00:07:42.210 ************************************ 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.210 13:19:16 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:42.210 13:19:16 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:42.210 [2024-07-13 13:19:16.519812] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:42.210 [2024-07-13 13:19:16.519970] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163024 ] 00:07:42.210 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.210 [2024-07-13 13:19:16.652535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.210 [2024-07-13 13:19:16.913715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:42.468 13:19:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.996 00:07:44.996 real 0m2.694s 00:07:44.996 user 0m2.457s 00:07:44.996 sys 0m0.235s 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.996 13:19:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:44.996 ************************************ 00:07:44.996 END TEST accel_crc32c_C2 00:07:44.996 ************************************ 00:07:44.996 13:19:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.996 13:19:19 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:44.996 13:19:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:44.996 13:19:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.996 13:19:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.996 ************************************ 00:07:44.996 START TEST accel_copy 00:07:44.996 ************************************ 00:07:44.996 13:19:19 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
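Note on the repeated "IFS=:", "read -r var val" and "case \"$var\" in" entries: they are the xtrace of the harness parsing accel_perf's "Key: value" output to learn which module and opcode actually ran before asserting on them (the checks logged at accel.sh@27). A minimal sketch of that pattern follows; it is a readability reconstruction, not the verbatim test/accel/accel.sh source, and the key names matched in the case arms are hypothetical.

  accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  accel_module="" accel_opc=""
  while IFS=: read -r var val; do                        # split each "Key: value" line on the colon
      case "$var" in
          *[Mm]odule*)   accel_module=$(xargs <<<"$val") ;;   # hypothetical key name; xargs trims whitespace
          *[Ww]orkload*) accel_opc=$(xargs <<<"$val") ;;      # hypothetical key name
      esac
  done < <("$accel_perf" -t 1 -w copy -y)
  [[ -n $accel_module ]] && [[ -n $accel_opc ]]          # mirrors the [[ -n ... ]] checks at accel.sh@27
  [[ $accel_module == software ]]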
00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:44.996 13:19:19 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:44.996 [2024-07-13 13:19:19.264739] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:44.996 [2024-07-13 13:19:19.264961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163417 ] 00:07:44.996 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.996 [2024-07-13 13:19:19.408974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.996 [2024-07-13 13:19:19.671081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.255 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.256 13:19:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.783 
13:19:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:47.783 13:19:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.783 00:07:47.783 real 0m2.710s 00:07:47.783 user 0m2.447s 00:07:47.783 sys 0m0.260s 00:07:47.783 13:19:21 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.783 13:19:21 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:47.783 ************************************ 00:07:47.783 END TEST accel_copy 00:07:47.783 ************************************ 00:07:47.783 13:19:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.783 13:19:21 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:47.783 13:19:21 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:47.783 13:19:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.783 13:19:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.783 ************************************ 00:07:47.783 START TEST accel_fill 00:07:47.783 ************************************ 00:07:47.783 13:19:21 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:47.783 13:19:21 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:47.783 [2024-07-13 13:19:22.017021] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:47.783 [2024-07-13 13:19:22.017169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163721 ] 00:07:47.783 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.783 [2024-07-13 13:19:22.144365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.783 [2024-07-13 13:19:22.404037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:48.042 13:19:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.940 13:19:24 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:49.940 13:19:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.940 00:07:49.940 real 0m2.681s 00:07:49.940 user 0m2.442s 00:07:49.940 sys 0m0.237s 00:07:49.940 13:19:24 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.940 13:19:24 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:49.940 ************************************ 00:07:49.940 END TEST accel_fill 00:07:49.940 ************************************ 00:07:49.940 13:19:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.940 13:19:24 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:49.940 13:19:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:49.940 13:19:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.940 13:19:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.203 ************************************ 00:07:50.203 START TEST accel_copy_crc32c 00:07:50.203 ************************************ 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:50.203 13:19:24 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:50.203 [2024-07-13 13:19:24.742589] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:50.203 [2024-07-13 13:19:24.742707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164016 ] 00:07:50.203 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.203 [2024-07-13 13:19:24.868910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.513 [2024-07-13 13:19:25.115195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:50.772 
13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:50.772 13:19:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.673 00:07:52.673 real 0m2.673s 00:07:52.673 user 0m0.012s 00:07:52.673 sys 0m0.001s 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.673 13:19:27 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:52.673 ************************************ 00:07:52.673 END TEST accel_copy_crc32c 00:07:52.673 ************************************ 00:07:52.673 13:19:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.673 13:19:27 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:52.673 13:19:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:52.673 13:19:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.673 13:19:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.931 ************************************ 00:07:52.931 START TEST accel_copy_crc32c_C2 00:07:52.931 ************************************ 00:07:52.931 13:19:27 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:52.931 13:19:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:52.931 [2024-07-13 13:19:27.462765] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:52.932 [2024-07-13 13:19:27.462902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164429 ] 00:07:52.932 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.932 [2024-07-13 13:19:27.590967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.190 [2024-07-13 13:19:27.854000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.448 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:53.449 13:19:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.973 00:07:55.973 real 0m2.715s 00:07:55.973 user 0m0.011s 00:07:55.973 sys 0m0.003s 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.973 13:19:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:55.973 ************************************ 00:07:55.973 END TEST accel_copy_crc32c_C2 00:07:55.973 ************************************ 00:07:55.973 13:19:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:55.973 13:19:30 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:55.973 13:19:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:55.973 13:19:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.973 13:19:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.973 ************************************ 00:07:55.973 START TEST accel_dualcast 00:07:55.973 ************************************ 00:07:55.973 13:19:30 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:55.973 13:19:30 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:55.974 [2024-07-13 13:19:30.222920] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
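Note on the per-test preamble ("build_accel_config", "accel_json_cfg=()", "local IFS=,", "jq -r .") and the "-c /dev/fd/62" argument: together they suggest the harness assembles an accel JSON config in memory and feeds it to accel_perf through process substitution rather than a temp file. A hedged sketch of that idea is below; the function body and the JSON shape are illustrative assumptions, since the log records only the xtrace line numbers, not the script text.

  accel_json_cfg=()                                       # per-test JSON snippets would be appended here
  build_accel_config() {
      local IFS=,                                         # join array elements with commas, as at accel.sh@40
      printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}\n' "${accel_json_cfg[*]}" | jq -r .
  }
  # accel_perf would then read the config from a file descriptor, e.g.:
  #   "$accel_perf" -c <(build_accel_config) -t 1 -w dualcast -y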
00:07:55.974 [2024-07-13 13:19:30.223134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164715 ] 00:07:55.974 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.974 [2024-07-13 13:19:30.367481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.974 [2024-07-13 13:19:30.630314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.231 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.232 13:19:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.756 13:19:32 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:58.756 13:19:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.756 00:07:58.756 real 0m2.713s 00:07:58.756 user 0m0.012s 00:07:58.756 sys 0m0.000s 00:07:58.756 13:19:32 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.757 13:19:32 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:58.757 ************************************ 00:07:58.757 END TEST accel_dualcast 00:07:58.757 ************************************ 00:07:58.757 13:19:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:58.757 13:19:32 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:58.757 13:19:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:58.757 13:19:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.757 13:19:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.757 ************************************ 00:07:58.757 START TEST accel_compare 00:07:58.757 ************************************ 00:07:58.757 13:19:32 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:58.757 13:19:32 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:58.757 [2024-07-13 13:19:32.980353] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
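Note: the dualcast pass above completed on the software accel module, and the compare pass is initializing. All of these accel cases drive the same example binary from the build tree shown in the trace; a minimal standalone sketch of the compare workload, using only the flags visible in the run_test line, might look like the lines below. This is an assumption-laden sketch, not the harness's exact command: the harness also pipes a JSON accel config over -c /dev/fd/62 (built by build_accel_config), which is omitted here on the assumption that the binary's defaults are acceptable for a manual run.
    # hypothetical manual run: compare workload for 1 second with result verification
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w compare -y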
00:07:58.757 [2024-07-13 13:19:32.980498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165126 ] 00:07:58.757 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.757 [2024-07-13 13:19:33.126055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.757 [2024-07-13 13:19:33.387396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.014 13:19:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.911 
13:19:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:00.911 13:19:35 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.911 00:08:00.911 real 0m2.714s 00:08:00.911 user 0m2.452s 00:08:00.911 sys 0m0.260s 00:08:00.911 13:19:35 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.911 13:19:35 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:00.911 ************************************ 00:08:00.911 END TEST accel_compare 00:08:00.911 ************************************ 00:08:01.170 13:19:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:01.170 13:19:35 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:01.170 13:19:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:01.170 13:19:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.170 13:19:35 accel -- common/autotest_common.sh@10 -- # set +x 00:08:01.170 ************************************ 00:08:01.170 START TEST accel_xor 00:08:01.170 ************************************ 00:08:01.170 13:19:35 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:01.170 13:19:35 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:01.170 13:19:35 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:01.170 13:19:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.170 13:19:35 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:01.170 13:19:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.170 13:19:35 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:01.170 13:19:35 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:01.170 13:19:35 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.170 13:19:35 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.170 13:19:35 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.170 13:19:35 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.171 13:19:35 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.171 13:19:35 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:01.171 13:19:35 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:01.171 [2024-07-13 13:19:35.738768] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:01.171 [2024-07-13 13:19:35.738913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165417 ] 00:08:01.171 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.171 [2024-07-13 13:19:35.866204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.429 [2024-07-13 13:19:36.111932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.687 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.687 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.687 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.687 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.687 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.687 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.687 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.688 13:19:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.216 00:08:04.216 real 0m2.669s 00:08:04.216 user 0m0.013s 00:08:04.216 sys 0m0.000s 00:08:04.216 13:19:38 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.216 13:19:38 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:04.216 ************************************ 00:08:04.216 END TEST accel_xor 00:08:04.216 ************************************ 00:08:04.216 13:19:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:04.216 13:19:38 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:04.216 13:19:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:04.216 13:19:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.216 13:19:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.216 ************************************ 00:08:04.216 START TEST accel_xor 00:08:04.216 ************************************ 00:08:04.216 13:19:38 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.216 13:19:38 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.217 13:19:38 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.217 13:19:38 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.217 13:19:38 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:04.217 13:19:38 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:04.217 [2024-07-13 13:19:38.453278] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
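Note: the single-source xor pass above is followed by a variant that xors three source buffers, selected with -x 3 in the run_test line. A comparable standalone sketch under the same assumptions as the earlier example (config fed via -c /dev/fd/62 omitted, flags taken only from the trace):
    # hypothetical manual run: xor workload, three source buffers, 1 second, verified
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3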
00:08:04.217 [2024-07-13 13:19:38.453403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165799 ] 00:08:04.217 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.217 [2024-07-13 13:19:38.586472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.217 [2024-07-13 13:19:38.847531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:04.475 13:19:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.374 13:19:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:06.375 13:19:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.375 00:08:06.375 real 0m2.693s 00:08:06.375 user 0m0.012s 00:08:06.375 sys 0m0.001s 00:08:06.375 13:19:41 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.375 13:19:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:06.375 ************************************ 00:08:06.375 END TEST accel_xor 00:08:06.375 ************************************ 00:08:06.633 13:19:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:06.633 13:19:41 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:06.633 13:19:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:06.633 13:19:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.633 13:19:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.633 ************************************ 00:08:06.633 START TEST accel_dif_verify 00:08:06.633 ************************************ 00:08:06.633 13:19:41 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:06.633 13:19:41 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:06.633 [2024-07-13 13:19:41.201721] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
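Note: the DIF cases begin here; the accel_dif_verify trace below programs 4096-byte buffers plus 512-byte and 8-byte values before the run, though the trace does not spell out what those two sizes control. A comparable standalone sketch, again using only the flags from the run_test line and leaving those values to the binary's defaults:
    # hypothetical manual run: dif_verify workload for 1 second
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify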
00:08:06.633 [2024-07-13 13:19:41.201860] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166116 ] 00:08:06.633 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.633 [2024-07-13 13:19:41.331813] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.891 [2024-07-13 13:19:41.593193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.149 13:19:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:09.682 13:19:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.682 00:08:09.682 real 0m2.696s 00:08:09.682 user 0m0.012s 00:08:09.682 sys 0m0.002s 00:08:09.682 13:19:43 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.682 13:19:43 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:09.682 ************************************ 00:08:09.682 END TEST accel_dif_verify 00:08:09.682 ************************************ 00:08:09.682 13:19:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.682 13:19:43 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:09.682 13:19:43 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:09.682 13:19:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.682 13:19:43 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.682 ************************************ 00:08:09.682 START TEST accel_dif_generate 00:08:09.682 ************************************ 00:08:09.682 13:19:43 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:09.682 13:19:43 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:09.682 13:19:43 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:09.682 13:19:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.682 
13:19:43 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:09.682 13:19:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.682 13:19:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:09.682 13:19:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:09.682 13:19:43 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.682 13:19:43 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.682 13:19:43 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.682 13:19:43 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.682 13:19:43 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.682 13:19:43 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:09.682 13:19:43 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:09.682 [2024-07-13 13:19:43.944185] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:09.682 [2024-07-13 13:19:43.944305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166408 ] 00:08:09.682 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.682 [2024-07-13 13:19:44.078492] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.682 [2024-07-13 13:19:44.338245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:09.940 13:19:44 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:09.940 13:19:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:12.462 13:19:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:12.462 13:19:46 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.462 00:08:12.462 real 0m2.692s 00:08:12.462 user 0m2.462s 00:08:12.462 sys 0m0.228s 00:08:12.462 13:19:46 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.462 13:19:46 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:12.462 ************************************ 00:08:12.462 END TEST accel_dif_generate 00:08:12.462 ************************************ 00:08:12.462 13:19:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:12.462 13:19:46 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:12.462 13:19:46 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:12.462 13:19:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.462 13:19:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.462 ************************************ 00:08:12.462 START TEST accel_dif_generate_copy 00:08:12.462 ************************************ 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:12.462 13:19:46 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:12.462 [2024-07-13 13:19:46.685550] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:12.462 [2024-07-13 13:19:46.685669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166819 ] 00:08:12.462 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.462 [2024-07-13 13:19:46.815509] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.462 [2024-07-13 13:19:47.081097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.720 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
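For reference, a minimal standalone reproduction of the dif_generate_copy run traced above might look like the sketch below. The binary path and the -t 1 / -w dif_generate_copy flags are copied verbatim from the trace; the -c /dev/fd/62 argument is a JSON accel config that the accel_test wrapper pipes in on a file descriptor, so the local config file used here is an assumed stand-in rather than something this log shows.

# Hypothetical standalone re-run of the traced 1-second software dif_generate_copy workload.
# ACCEL_JSON is an assumed stand-in for the config the harness supplies on /dev/fd/62.
ACCEL_JSON=/tmp/accel.json
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -c "$ACCEL_JSON" -t 1 -w dif_generate_copy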
00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.721 13:19:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.621 00:08:14.621 real 0m2.699s 00:08:14.621 user 0m0.012s 00:08:14.621 sys 0m0.001s 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.621 13:19:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:14.621 ************************************ 00:08:14.621 END TEST accel_dif_generate_copy 00:08:14.621 ************************************ 00:08:14.621 13:19:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:14.880 13:19:49 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:14.880 13:19:49 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:14.880 13:19:49 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:14.880 13:19:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.880 13:19:49 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.880 ************************************ 00:08:14.880 START TEST accel_comp 00:08:14.880 ************************************ 00:08:14.880 13:19:49 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:14.880 13:19:49 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:08:14.880 13:19:49 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:14.880 13:19:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:14.880 13:19:49 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:14.880 13:19:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:14.880 13:19:49 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:14.880 13:19:49 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:14.880 13:19:49 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.880 13:19:49 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.880 13:19:49 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.880 13:19:49 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.880 13:19:49 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.880 13:19:49 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:14.880 13:19:49 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:14.880 [2024-07-13 13:19:49.438242] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:14.880 [2024-07-13 13:19:49.438365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167104 ] 00:08:14.880 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.880 [2024-07-13 13:19:49.568768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.139 [2024-07-13 13:19:49.827758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.397 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.397 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.397 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.397 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.397 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.397 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.397 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.397 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.397 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.397 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.397 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.397 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.397 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.398 13:19:50 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.398 13:19:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:17.929 13:19:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.929 00:08:17.929 real 0m2.702s 00:08:17.929 user 0m2.459s 00:08:17.929 sys 0m0.242s 00:08:17.929 13:19:52 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.929 13:19:52 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:17.929 ************************************ 00:08:17.929 END TEST accel_comp 00:08:17.929 ************************************ 00:08:17.929 13:19:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:17.929 13:19:52 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:17.929 13:19:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:17.929 13:19:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.929 13:19:52 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:17.929 ************************************ 00:08:17.929 START TEST accel_decomp 00:08:17.929 ************************************ 00:08:17.929 13:19:52 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:17.929 13:19:52 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:17.929 [2024-07-13 13:19:52.187579] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
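The decompress case traced above adds two arguments visible in the accel_perf command line: -l pointing at the bib test file checked into the SPDK tree, and -y. A similarly hedged standalone sketch, again substituting an assumed local config file for the /dev/fd/62 descriptor the wrapper provides:

# Standalone sketch of the traced software decompress workload (flags copied verbatim from the trace).
ACCEL_JSON=/tmp/accel.json   # assumed stand-in for the harness-supplied config
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -c "$ACCEL_JSON" -t 1 -w decompress \
    -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y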
00:08:17.929 [2024-07-13 13:19:52.187702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167514 ] 00:08:17.929 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.929 [2024-07-13 13:19:52.322409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.929 [2024-07-13 13:19:52.582166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.189 13:19:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.713 13:19:54 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.713 13:19:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:20.714 13:19:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.714 13:19:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:20.714 13:19:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.714 13:19:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:20.714 13:19:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.714 13:19:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:20.714 13:19:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:20.714 13:19:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:20.714 13:19:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:20.714 13:19:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.714 13:19:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:20.714 13:19:54 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.714 00:08:20.714 real 0m2.705s 00:08:20.714 user 0m2.460s 00:08:20.714 sys 0m0.244s 00:08:20.714 13:19:54 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.714 13:19:54 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:20.714 ************************************ 00:08:20.714 END TEST accel_decomp 00:08:20.714 ************************************ 00:08:20.714 13:19:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:20.714 13:19:54 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:20.714 13:19:54 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:20.714 13:19:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.714 13:19:54 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.714 ************************************ 00:08:20.714 START TEST accel_decomp_full 00:08:20.714 ************************************ 00:08:20.714 13:19:54 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:20.714 13:19:54 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:20.714 13:19:54 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:20.714 13:19:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.714 13:19:54 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:20.714 13:19:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.714 13:19:54 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:20.714 13:19:54 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:20.714 13:19:54 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.714 13:19:54 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.714 13:19:54 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.714 13:19:54 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.714 13:19:54 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.714 13:19:54 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:20.714 13:19:54 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:20.714 [2024-07-13 13:19:54.940431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:20.714 [2024-07-13 13:19:54.940553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167812 ] 00:08:20.714 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.714 [2024-07-13 13:19:55.068173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.714 [2024-07-13 13:19:55.328499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.973 13:19:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:22.872 13:19:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.872 00:08:22.872 real 0m2.711s 00:08:22.872 user 0m0.012s 00:08:22.872 sys 0m0.001s 00:08:22.872 13:19:57 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.872 13:19:57 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:22.872 ************************************ 00:08:22.872 END TEST accel_decomp_full 00:08:22.872 ************************************ 00:08:23.130 13:19:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:23.130 13:19:57 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:23.130 13:19:57 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:08:23.130 13:19:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.130 13:19:57 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.130 ************************************ 00:08:23.130 START TEST accel_decomp_mcore 00:08:23.130 ************************************ 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:23.130 13:19:57 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:23.130 [2024-07-13 13:19:57.702255] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
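The mcore variant that starts here differs only in the core mask: the traced accel_perf command adds -m 0xf, and the EAL parameters and reactor messages that follow show four cores coming up. A minimal sketch under the same config-file assumption as the earlier examples:

# Multi-core decompress run as traced; -m 0xf matches the four reactors reported below.
ACCEL_JSON=/tmp/accel.json   # assumed stand-in for the harness-supplied config
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -c "$ACCEL_JSON" -t 1 -w decompress \
    -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf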
00:08:23.130 [2024-07-13 13:19:57.702371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168120 ] 00:08:23.130 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.130 [2024-07-13 13:19:57.834273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.388 [2024-07-13 13:19:58.102545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.388 [2024-07-13 13:19:58.102601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.388 [2024-07-13 13:19:58.102649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.388 [2024-07-13 13:19:58.102659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.646 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:23.647 13:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.546 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.805 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:25.805 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:25.805 13:20:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.805 00:08:25.805 real 0m2.638s 00:08:25.805 user 0m0.014s 00:08:25.805 sys 0m0.001s 00:08:25.805 13:20:00 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.805 13:20:00 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:25.805 ************************************ 00:08:25.805 END TEST accel_decomp_mcore 00:08:25.805 ************************************ 00:08:25.805 13:20:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:25.805 13:20:00 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:25.805 13:20:00 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:25.805 13:20:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.805 13:20:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:25.805 ************************************ 00:08:25.805 START TEST accel_decomp_full_mcore 00:08:25.805 ************************************ 00:08:25.805 13:20:00 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:25.805 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:25.805 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:25.805 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:25.805 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:25.805 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:25.805 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:25.805 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:25.806 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:25.806 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:25.806 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.806 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.806 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:25.806 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:25.806 13:20:00 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:25.806 [2024-07-13 13:20:00.387259] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
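Editor's note on the harness: the START TEST / END TEST banners and the real/user/sys lines in this log are produced by the run_test helper that wraps each suite (run_test accel_decomp_full_mcore ... above). A simplified sketch of its shape, for orientation only; the real helper referenced by the autotest_common.sh line tags in this log also manages xtrace and exit-code bookkeeping, so the body below is illustrative, not the actual implementation:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                      # source of the real/user/sys lines seen in this log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }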
00:08:25.806 [2024-07-13 13:20:00.387410] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168510 ] 00:08:25.806 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.806 [2024-07-13 13:20:00.532654] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.065 [2024-07-13 13:20:00.800703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.065 [2024-07-13 13:20:00.800759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.065 [2024-07-13 13:20:00.800804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.065 [2024-07-13 13:20:00.800815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.356 13:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.895 00:08:28.895 real 0m2.712s 00:08:28.895 user 0m0.010s 00:08:28.895 sys 0m0.006s 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.895 13:20:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:28.895 ************************************ 00:08:28.895 END TEST accel_decomp_full_mcore 00:08:28.895 ************************************ 00:08:28.895 13:20:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:28.895 13:20:03 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:28.895 13:20:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:28.895 13:20:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.895 13:20:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.895 ************************************ 00:08:28.895 START TEST accel_decomp_mthread 00:08:28.895 ************************************ 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:28.895 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:28.895 [2024-07-13 13:20:03.148688] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
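Editor's note: the suite that just finished (accel_decomp_full_mcore) ran the decompress job across a 0xf core mask (four reactors, cores 0-3 in the notices above), while the accel_decomp_mthread run starting here stays on a single core and asks accel_perf for two worker threads via -T 2. Both command lines are copied verbatim from the trace; $SPDK is a shorthand introduced here for the workspace path, and /dev/fd/62 carries the accel JSON config the harness builds, so a standalone re-run would need its own config:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # four reactors (-m 0xf), full-size buffers (-o 0): accel_decomp_full_mcore
    $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf
    # one reactor, two worker threads (-T 2): accel_decomp_mthread
    $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2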
00:08:28.895 [2024-07-13 13:20:03.148812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168811 ] 00:08:28.895 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.895 [2024-07-13 13:20:03.277743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.895 [2024-07-13 13:20:03.540570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.154 13:20:03 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.154 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.155 13:20:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.067 13:20:05 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.067 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.325 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.325 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:31.325 13:20:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.325 00:08:31.325 real 0m2.708s 00:08:31.325 user 0m2.468s 00:08:31.325 sys 0m0.239s 00:08:31.325 13:20:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.325 13:20:05 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:31.325 ************************************ 00:08:31.325 END TEST accel_decomp_mthread 00:08:31.325 ************************************ 00:08:31.325 13:20:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:31.325 13:20:05 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:31.325 13:20:05 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:31.325 13:20:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.325 13:20:05 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.325 ************************************ 00:08:31.325 START TEST accel_decomp_full_mthread 00:08:31.325 ************************************ 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:31.325 13:20:05 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:31.325 [2024-07-13 13:20:05.905481] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
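Editor's note: the "_full_" variants differ from their plain counterparts only by the extra -o 0 flag. Judging from the values recorded in the traces (not from the option's documentation), that switches the job from 4096-byte chunks to one full-sized 111250-byte buffer, which matches the size of the bib input file. Side by side, with $SPDK again standing in for the workspace path:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # accel_decomp_mthread       -> trace records val='4096 bytes'
    $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2
    # accel_decomp_full_mthread  -> trace records val='111250 bytes'
    $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2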
00:08:31.325 [2024-07-13 13:20:05.905609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169218 ] 00:08:31.325 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.325 [2024-07-13 13:20:06.036407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.583 [2024-07-13 13:20:06.297449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.840 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.840 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.841 13:20:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.369 00:08:34.369 real 0m2.749s 00:08:34.369 user 0m0.011s 00:08:34.369 sys 0m0.003s 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.369 13:20:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:34.369 ************************************ 00:08:34.369 END TEST accel_decomp_full_mthread 
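Editor's note, for quick comparison: the wall-clock times run_test reported for the four decompress variants in this log were

    accel_decomp_mcore         real 0m2.638s
    accel_decomp_full_mcore    real 0m2.712s
    accel_decomp_mthread       real 0m2.708s
    accel_decomp_full_mthread  real 0m2.749s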
00:08:34.369 ************************************ 00:08:34.369 13:20:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:34.369 13:20:08 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:34.369 13:20:08 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:34.369 13:20:08 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:34.369 13:20:08 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:34.369 13:20:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:34.369 13:20:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.369 13:20:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:34.369 13:20:08 accel -- common/autotest_common.sh@10 -- # set +x 00:08:34.369 13:20:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.369 13:20:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.369 13:20:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:34.369 13:20:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:34.369 13:20:08 accel -- accel/accel.sh@41 -- # jq -r . 00:08:34.369 ************************************ 00:08:34.369 START TEST accel_dif_functional_tests 00:08:34.369 ************************************ 00:08:34.369 13:20:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:34.369 [2024-07-13 13:20:08.739362] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:34.369 [2024-07-13 13:20:08.739498] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169509 ] 00:08:34.369 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.369 [2024-07-13 13:20:08.872412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:34.628 [2024-07-13 13:20:09.138313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.628 [2024-07-13 13:20:09.138361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.628 [2024-07-13 13:20:09.138370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.887 00:08:34.887 00:08:34.887 CUnit - A unit testing framework for C - Version 2.1-3 00:08:34.887 http://cunit.sourceforge.net/ 00:08:34.887 00:08:34.887 00:08:34.887 Suite: accel_dif 00:08:34.887 Test: verify: DIF generated, GUARD check ...passed 00:08:34.887 Test: verify: DIF generated, APPTAG check ...passed 00:08:34.887 Test: verify: DIF generated, REFTAG check ...passed 00:08:34.887 Test: verify: DIF not generated, GUARD check ...[2024-07-13 13:20:09.491722] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:34.887 passed 00:08:34.887 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 13:20:09.491834] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:34.887 passed 00:08:34.887 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 13:20:09.491915] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:34.887 passed 00:08:34.887 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:34.887 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 13:20:09.492051] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:34.887 passed 00:08:34.887 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:34.887 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:34.887 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:34.887 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 13:20:09.492328] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:34.887 passed 00:08:34.887 Test: verify copy: DIF generated, GUARD check ...passed 00:08:34.887 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:34.887 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:34.887 Test: verify copy: DIF not generated, GUARD check ...[2024-07-13 13:20:09.492638] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:34.887 passed 00:08:34.887 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 13:20:09.492728] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:34.887 passed 00:08:34.887 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 13:20:09.492811] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:34.887 passed 00:08:34.887 Test: generate copy: DIF generated, GUARD check ...passed 00:08:34.887 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:34.887 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:34.887 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:34.887 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:34.887 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:34.887 Test: generate copy: iovecs-len validate ...[2024-07-13 13:20:09.493301] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
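Editor's note: the accel_dif suite runs the prebuilt test/accel/dif/dif binary against the generated accel config, and the *ERROR* lines above are expected output from the negative-path cases, where verification is run against buffers whose guard, app-tag or ref-tag fields were deliberately not generated, so the mismatch report is exactly what each assertion checks for. A hedged sketch of re-running just this binary by hand; the empty JSON config here is an assumption standing in for whatever build_accel_config writes to the /dev/fd/62 descriptor in the real harness:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # process substitution plays the role of the /dev/fd/62 config descriptor used by the harness
    $SPDK/test/accel/dif/dif -c <(echo '{}')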
00:08:34.887 passed 00:08:34.887 Test: generate copy: buffer alignment validate ...passed 00:08:34.887 00:08:34.887 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.887 suites 1 1 n/a 0 0 00:08:34.887 tests 26 26 26 0 0 00:08:34.887 asserts 115 115 115 0 n/a 00:08:34.887 00:08:34.887 Elapsed time = 0.005 seconds 00:08:36.263 00:08:36.263 real 0m2.167s 00:08:36.263 user 0m4.280s 00:08:36.263 sys 0m0.307s 00:08:36.263 13:20:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.263 13:20:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:36.263 ************************************ 00:08:36.263 END TEST accel_dif_functional_tests 00:08:36.263 ************************************ 00:08:36.263 13:20:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:36.263 00:08:36.263 real 1m4.880s 00:08:36.263 user 1m11.443s 00:08:36.263 sys 0m7.162s 00:08:36.263 13:20:10 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.263 13:20:10 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.263 ************************************ 00:08:36.263 END TEST accel 00:08:36.263 ************************************ 00:08:36.263 13:20:10 -- common/autotest_common.sh@1142 -- # return 0 00:08:36.263 13:20:10 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:36.263 13:20:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:36.263 13:20:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.263 13:20:10 -- common/autotest_common.sh@10 -- # set +x 00:08:36.263 ************************************ 00:08:36.263 START TEST accel_rpc 00:08:36.263 ************************************ 00:08:36.263 13:20:10 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:36.263 * Looking for test storage... 00:08:36.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:36.263 13:20:10 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:36.263 13:20:10 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=169839 00:08:36.263 13:20:10 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:36.263 13:20:10 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 169839 00:08:36.263 13:20:10 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 169839 ']' 00:08:36.263 13:20:10 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.263 13:20:10 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.263 13:20:10 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.263 13:20:10 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.263 13:20:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.521 [2024-07-13 13:20:11.058455] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:36.521 [2024-07-13 13:20:11.058638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169839 ] 00:08:36.521 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.521 [2024-07-13 13:20:11.205671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.780 [2024-07-13 13:20:11.466096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.346 13:20:11 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.346 13:20:11 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:37.346 13:20:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:37.346 13:20:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:37.346 13:20:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:37.346 13:20:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:37.346 13:20:11 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:37.346 13:20:11 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:37.346 13:20:11 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.346 13:20:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.346 ************************************ 00:08:37.346 START TEST accel_assign_opcode 00:08:37.346 ************************************ 00:08:37.346 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:37.346 13:20:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:37.346 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.346 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.346 [2024-07-13 13:20:12.024461] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:37.346 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.346 13:20:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:37.346 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.346 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.346 [2024-07-13 13:20:12.032436] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:37.346 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.346 13:20:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:37.346 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.346 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:38.290 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.290 13:20:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:38.290 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.290 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
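Editor's note: because spdk_tgt was started with --wait-for-rpc, the accel framework is still uninitialized when the first assignment above is made; that is why assigning the copy opcode to the non-existent module "incorrect" is accepted at this point, and only the query made after framework_start_init is expected to report the real module. The sequence being driven here, sketched as direct rpc.py calls (the test itself goes through the rpc_cmd wrapper):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted pre-init, module name not validated yet
    ./scripts/rpc.py accel_assign_opc -o copy -m software    # final assignment
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # expected to print "software"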
00:08:38.290 13:20:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:38.290 13:20:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:38.290 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.290 software 00:08:38.290 00:08:38.290 real 0m0.944s 00:08:38.290 user 0m0.041s 00:08:38.290 sys 0m0.007s 00:08:38.290 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.290 13:20:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:38.290 ************************************ 00:08:38.290 END TEST accel_assign_opcode 00:08:38.290 ************************************ 00:08:38.290 13:20:12 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:38.290 13:20:12 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 169839 00:08:38.290 13:20:12 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 169839 ']' 00:08:38.290 13:20:12 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 169839 00:08:38.290 13:20:12 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:38.290 13:20:12 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:38.290 13:20:12 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 169839 00:08:38.290 13:20:13 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:38.290 13:20:13 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:38.290 13:20:13 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 169839' 00:08:38.290 killing process with pid 169839 00:08:38.290 13:20:13 accel_rpc -- common/autotest_common.sh@967 -- # kill 169839 00:08:38.290 13:20:13 accel_rpc -- common/autotest_common.sh@972 -- # wait 169839 00:08:40.822 00:08:40.822 real 0m4.638s 00:08:40.822 user 0m4.624s 00:08:40.822 sys 0m0.635s 00:08:40.822 13:20:15 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.822 13:20:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.822 ************************************ 00:08:40.822 END TEST accel_rpc 00:08:40.822 ************************************ 00:08:40.822 13:20:15 -- common/autotest_common.sh@1142 -- # return 0 00:08:40.822 13:20:15 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:40.822 13:20:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:40.822 13:20:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.822 13:20:15 -- common/autotest_common.sh@10 -- # set +x 00:08:41.080 ************************************ 00:08:41.080 START TEST app_cmdline 00:08:41.080 ************************************ 00:08:41.080 13:20:15 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:41.080 * Looking for test storage... 
00:08:41.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:41.080 13:20:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:41.080 13:20:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=170442 00:08:41.080 13:20:15 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:41.080 13:20:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 170442 00:08:41.080 13:20:15 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 170442 ']' 00:08:41.080 13:20:15 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.080 13:20:15 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.080 13:20:15 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.080 13:20:15 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.080 13:20:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:41.080 [2024-07-13 13:20:15.733171] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:41.080 [2024-07-13 13:20:15.733332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170442 ] 00:08:41.080 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.338 [2024-07-13 13:20:15.866508] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.596 [2024-07-13 13:20:16.128022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.529 13:20:17 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.529 13:20:17 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:42.529 13:20:17 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:42.787 { 00:08:42.787 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:08:42.787 "fields": { 00:08:42.787 "major": 24, 00:08:42.787 "minor": 9, 00:08:42.787 "patch": 0, 00:08:42.787 "suffix": "-pre", 00:08:42.787 "commit": "719d03c6a" 00:08:42.787 } 00:08:42.787 } 00:08:42.787 13:20:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:42.787 13:20:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:42.787 13:20:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:42.787 13:20:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:42.787 13:20:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:42.787 13:20:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:42.787 13:20:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.787 13:20:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:42.787 13:20:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:42.787 13:20:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:42.787 13:20:17 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:43.045 request: 00:08:43.045 { 00:08:43.045 "method": "env_dpdk_get_mem_stats", 00:08:43.045 "req_id": 1 00:08:43.045 } 00:08:43.045 Got JSON-RPC error response 00:08:43.045 response: 00:08:43.045 { 00:08:43.045 "code": -32601, 00:08:43.045 "message": "Method not found" 00:08:43.045 } 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:43.045 13:20:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 170442 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 170442 ']' 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 170442 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 170442 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 170442' 00:08:43.045 killing process with pid 170442 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@967 -- # kill 170442 00:08:43.045 13:20:17 app_cmdline -- common/autotest_common.sh@972 -- # wait 170442 00:08:45.601 00:08:45.601 real 0m4.656s 00:08:45.601 user 0m5.056s 00:08:45.601 sys 0m0.695s 00:08:45.601 13:20:20 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.601 
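The cmdline test above is really a check of spdk_tgt's RPC allow-list: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two calls succeed while env_dpdk_get_mem_stats is refused with JSON-RPC error -32601 (Method not found). A minimal standalone sketch of the same behaviour, assuming SPDK_DIR points at a built SPDK tree (the variable name is just shorthand here, not part of the harness):

  # start a target that only exposes two RPC methods
  "$SPDK_DIR"/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  tgt_pid=$!
  sleep 2    # crude wait; the harness uses waitforlisten instead

  # allowed methods: the version JSON and the method list seen above
  "$SPDK_DIR"/scripts/rpc.py spdk_get_version
  "$SPDK_DIR"/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort

  # anything off the allow-list is rejected with -32601 "Method not found"
  "$SPDK_DIR"/scripts/rpc.py env_dpdk_get_mem_stats || echo 'rejected as expected'

  kill "$tgt_pid"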
13:20:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:45.601 ************************************ 00:08:45.601 END TEST app_cmdline 00:08:45.601 ************************************ 00:08:45.601 13:20:20 -- common/autotest_common.sh@1142 -- # return 0 00:08:45.601 13:20:20 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:45.601 13:20:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:45.601 13:20:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.601 13:20:20 -- common/autotest_common.sh@10 -- # set +x 00:08:45.601 ************************************ 00:08:45.601 START TEST version 00:08:45.601 ************************************ 00:08:45.601 13:20:20 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:45.859 * Looking for test storage... 00:08:45.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:45.859 13:20:20 version -- app/version.sh@17 -- # get_header_version major 00:08:45.859 13:20:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:45.859 13:20:20 version -- app/version.sh@14 -- # cut -f2 00:08:45.859 13:20:20 version -- app/version.sh@14 -- # tr -d '"' 00:08:45.859 13:20:20 version -- app/version.sh@17 -- # major=24 00:08:45.859 13:20:20 version -- app/version.sh@18 -- # get_header_version minor 00:08:45.859 13:20:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:45.859 13:20:20 version -- app/version.sh@14 -- # cut -f2 00:08:45.859 13:20:20 version -- app/version.sh@14 -- # tr -d '"' 00:08:45.859 13:20:20 version -- app/version.sh@18 -- # minor=9 00:08:45.859 13:20:20 version -- app/version.sh@19 -- # get_header_version patch 00:08:45.859 13:20:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:45.859 13:20:20 version -- app/version.sh@14 -- # cut -f2 00:08:45.859 13:20:20 version -- app/version.sh@14 -- # tr -d '"' 00:08:45.859 13:20:20 version -- app/version.sh@19 -- # patch=0 00:08:45.859 13:20:20 version -- app/version.sh@20 -- # get_header_version suffix 00:08:45.859 13:20:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:45.859 13:20:20 version -- app/version.sh@14 -- # cut -f2 00:08:45.859 13:20:20 version -- app/version.sh@14 -- # tr -d '"' 00:08:45.859 13:20:20 version -- app/version.sh@20 -- # suffix=-pre 00:08:45.859 13:20:20 version -- app/version.sh@22 -- # version=24.9 00:08:45.859 13:20:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:45.859 13:20:20 version -- app/version.sh@28 -- # version=24.9rc0 00:08:45.859 13:20:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:45.859 13:20:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
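version.sh builds the version string by grepping the SPDK_VERSION_* defines out of include/spdk/version.h (grep, then cut -f2 on the define line, then tr to strip quotes) and then asks the Python bindings for spdk.__version__, whose value appears just below and has to match. A rough reconstruction of that extraction, assuming the same tab-separated header layout the cut -f2 in the trace relies on, with SPDK_DIR again as shorthand for the checkout:

  get_header_version() {
      # e.g. '#define SPDK_VERSION_MAJOR<TAB>24' -> '24'
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
          "$SPDK_DIR/include/spdk/version.h" | cut -f2 | tr -d '"'
  }

  major=$(get_header_version MAJOR)     # 24 in this run
  minor=$(get_header_version MINOR)     # 9
  patch=$(get_header_version PATCH)     # 0
  suffix=$(get_header_version SUFFIX)   # -pre
  version="$major.$minor"
  (( patch != 0 )) && version="$version.$patch"
  [[ -n $suffix ]] && version="${version}rc0"   # a -pre build is reported as rc0, matching the 24.9rc0 seen in this run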
00:08:45.859 13:20:20 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:45.859 13:20:20 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:45.859 00:08:45.859 real 0m0.110s 00:08:45.859 user 0m0.062s 00:08:45.859 sys 0m0.071s 00:08:45.859 13:20:20 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.859 13:20:20 version -- common/autotest_common.sh@10 -- # set +x 00:08:45.859 ************************************ 00:08:45.859 END TEST version 00:08:45.859 ************************************ 00:08:45.859 13:20:20 -- common/autotest_common.sh@1142 -- # return 0 00:08:45.860 13:20:20 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:45.860 13:20:20 -- spdk/autotest.sh@198 -- # uname -s 00:08:45.860 13:20:20 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:45.860 13:20:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:45.860 13:20:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:45.860 13:20:20 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:45.860 13:20:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:45.860 13:20:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:45.860 13:20:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:45.860 13:20:20 -- common/autotest_common.sh@10 -- # set +x 00:08:45.860 13:20:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:45.860 13:20:20 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:45.860 13:20:20 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:45.860 13:20:20 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:45.860 13:20:20 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:45.860 13:20:20 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:45.860 13:20:20 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:45.860 13:20:20 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:45.860 13:20:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.860 13:20:20 -- common/autotest_common.sh@10 -- # set +x 00:08:45.860 ************************************ 00:08:45.860 START TEST nvmf_tcp 00:08:45.860 ************************************ 00:08:45.860 13:20:20 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:45.860 * Looking for test storage... 00:08:45.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.860 13:20:20 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.860 13:20:20 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.860 13:20:20 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.860 13:20:20 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.860 13:20:20 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.860 13:20:20 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.860 13:20:20 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:45.860 13:20:20 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:45.860 13:20:20 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.860 13:20:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:45.860 13:20:20 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:45.860 13:20:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:45.860 13:20:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.860 13:20:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:45.860 ************************************ 00:08:45.860 START TEST nvmf_example 00:08:45.860 ************************************ 00:08:45.860 13:20:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:45.860 * Looking for test storage... 
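One thing nvmf/common.sh establishes here is a per-run host identity: nvme gen-hostnqn produces the hostnqn, the uuid portion of it is used as the host ID (as the values logged above show), and both are packed into the NVME_HOST array for initiator-side tests. This particular example test only drives the target with the userspace perf tool, so the kernel-initiator connect below is illustrative only (a sketch using standard nvme-cli flags; the target address and subsystem NQN are the ones configured later in this run):

  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # uuid part of the generated NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

  # how an initiator-side test would consume it
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"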
00:08:45.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.860 13:20:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.860 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:45.860 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.860 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.860 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.860 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.860 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.860 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.860 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:46.119 13:20:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.016 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.016 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.016 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.016 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.016 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.016 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:48.017 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:48.017 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:48.017 Found net devices under 
0000:0a:00.0: cvl_0_0 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:48.017 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:08:48.017 00:08:48.017 --- 10.0.0.2 ping statistics --- 00:08:48.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.017 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:08:48.017 00:08:48.017 --- 10.0.0.1 ping statistics --- 00:08:48.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.017 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=172795 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 172795 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 172795 ']' 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
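nvmf_tcp_init, traced above, loops the two ice ports back to each other on one box by pushing the target-side port into a network namespace: cvl_0_0 becomes 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in the firewall, and both directions are ping-tested before nvme-tcp is loaded. Condensed from the trace (a sketch; the interface names are specific to this rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp

  # from here on the target app and its RPC setup run wrapped in the namespace
  NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)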
00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.017 13:20:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.274 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.203 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.204 13:20:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.204 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.204 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.204 13:20:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.204 13:20:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:49.204 13:20:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:49.461 EAL: No free 2048 kB hugepages reported on node 1 
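Before the perf results that follow, the test provisions the target entirely over JSON-RPC: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem carrying that bdev as namespace 1, and a listener on 10.0.0.2:4420; it then drives it from the initiator side with spdk_nvme_perf. The harness issues these through its rpc_cmd wrapper; roughly the same sequence as plain rpc.py calls (a sketch, assuming the example target's default RPC socket at /var/tmp/spdk.sock and SPDK_DIR as shorthand for the checkout):

  rpc="$SPDK_DIR/scripts/rpc.py"

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                      # prints the new bdev name, Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 4 KiB mixed random I/O, queue depth 64, 10 seconds, over NVMe/TCP
  "$SPDK_DIR"/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'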
00:08:59.425 Initializing NVMe Controllers 00:08:59.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:59.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:59.425 Initialization complete. Launching workers. 00:08:59.425 ======================================================== 00:08:59.425 Latency(us) 00:08:59.425 Device Information : IOPS MiB/s Average min max 00:08:59.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11322.20 44.23 5655.36 1288.94 20488.38 00:08:59.425 ======================================================== 00:08:59.425 Total : 11322.20 44.23 5655.36 1288.94 20488.38 00:08:59.425 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.682 rmmod nvme_tcp 00:08:59.682 rmmod nvme_fabrics 00:08:59.682 rmmod nvme_keyring 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 172795 ']' 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 172795 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 172795 ']' 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 172795 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172795 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172795' 00:08:59.682 killing process with pid 172795 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 172795 00:08:59.682 13:20:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 172795 00:09:01.057 nvmf threads initialize successfully 00:09:01.057 bdev subsystem init successfully 00:09:01.057 created a nvmf target service 00:09:01.057 create targets's poll groups done 00:09:01.057 all subsystems of target started 00:09:01.057 nvmf target is running 00:09:01.057 all subsystems of target stopped 00:09:01.057 destroy targets's poll groups done 00:09:01.057 destroyed the nvmf target service 00:09:01.057 bdev subsystem finish successfully 00:09:01.057 nvmf threads destroy successfully 00:09:01.057 13:20:35 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.057 13:20:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.057 13:20:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.057 13:20:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.057 13:20:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.057 13:20:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.057 13:20:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.057 13:20:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.965 13:20:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:02.965 13:20:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:02.965 13:20:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.965 13:20:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:02.965 00:09:02.965 real 0m17.089s 00:09:02.965 user 0m44.404s 00:09:02.965 sys 0m4.639s 00:09:02.965 13:20:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.965 13:20:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:02.965 ************************************ 00:09:02.965 END TEST nvmf_example 00:09:02.965 ************************************ 00:09:02.965 13:20:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:02.965 13:20:37 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:02.965 13:20:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:02.965 13:20:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.965 13:20:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:02.965 ************************************ 00:09:02.965 START TEST nvmf_filesystem 00:09:02.965 ************************************ 00:09:02.965 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:03.227 * Looking for test storage... 
00:09:03.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:03.227 13:20:37 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:03.227 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:03.228 #define SPDK_CONFIG_H 00:09:03.228 #define SPDK_CONFIG_APPS 1 00:09:03.228 #define SPDK_CONFIG_ARCH native 00:09:03.228 #define SPDK_CONFIG_ASAN 1 00:09:03.228 #undef SPDK_CONFIG_AVAHI 00:09:03.228 #undef SPDK_CONFIG_CET 00:09:03.228 #define SPDK_CONFIG_COVERAGE 1 00:09:03.228 #define SPDK_CONFIG_CROSS_PREFIX 00:09:03.228 #undef SPDK_CONFIG_CRYPTO 00:09:03.228 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:03.228 #undef SPDK_CONFIG_CUSTOMOCF 00:09:03.228 #undef SPDK_CONFIG_DAOS 00:09:03.228 #define SPDK_CONFIG_DAOS_DIR 00:09:03.228 #define SPDK_CONFIG_DEBUG 1 00:09:03.228 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:03.228 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:03.228 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:03.228 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:03.228 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:03.228 #undef SPDK_CONFIG_DPDK_UADK 00:09:03.228 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:03.228 #define SPDK_CONFIG_EXAMPLES 1 00:09:03.228 #undef SPDK_CONFIG_FC 00:09:03.228 #define SPDK_CONFIG_FC_PATH 00:09:03.228 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:03.228 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:03.228 #undef SPDK_CONFIG_FUSE 00:09:03.228 #undef SPDK_CONFIG_FUZZER 00:09:03.228 #define SPDK_CONFIG_FUZZER_LIB 00:09:03.228 #undef SPDK_CONFIG_GOLANG 00:09:03.228 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:03.228 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:03.228 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:03.228 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:03.228 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:03.228 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:03.228 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:03.228 #define SPDK_CONFIG_IDXD 1 00:09:03.228 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:03.228 #undef SPDK_CONFIG_IPSEC_MB 00:09:03.228 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:03.228 #define SPDK_CONFIG_ISAL 1 00:09:03.228 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:03.228 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:03.228 #define SPDK_CONFIG_LIBDIR 00:09:03.228 #undef SPDK_CONFIG_LTO 00:09:03.228 #define SPDK_CONFIG_MAX_LCORES 128 00:09:03.228 #define SPDK_CONFIG_NVME_CUSE 1 00:09:03.228 #undef SPDK_CONFIG_OCF 00:09:03.228 #define SPDK_CONFIG_OCF_PATH 00:09:03.228 #define 
SPDK_CONFIG_OPENSSL_PATH 00:09:03.228 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:03.228 #define SPDK_CONFIG_PGO_DIR 00:09:03.228 #undef SPDK_CONFIG_PGO_USE 00:09:03.228 #define SPDK_CONFIG_PREFIX /usr/local 00:09:03.228 #undef SPDK_CONFIG_RAID5F 00:09:03.228 #undef SPDK_CONFIG_RBD 00:09:03.228 #define SPDK_CONFIG_RDMA 1 00:09:03.228 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:03.228 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:03.228 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:03.228 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:03.228 #define SPDK_CONFIG_SHARED 1 00:09:03.228 #undef SPDK_CONFIG_SMA 00:09:03.228 #define SPDK_CONFIG_TESTS 1 00:09:03.228 #undef SPDK_CONFIG_TSAN 00:09:03.228 #define SPDK_CONFIG_UBLK 1 00:09:03.228 #define SPDK_CONFIG_UBSAN 1 00:09:03.228 #undef SPDK_CONFIG_UNIT_TESTS 00:09:03.228 #undef SPDK_CONFIG_URING 00:09:03.228 #define SPDK_CONFIG_URING_PATH 00:09:03.228 #undef SPDK_CONFIG_URING_ZNS 00:09:03.228 #undef SPDK_CONFIG_USDT 00:09:03.228 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:03.228 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:03.228 #undef SPDK_CONFIG_VFIO_USER 00:09:03.228 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:03.228 #define SPDK_CONFIG_VHOST 1 00:09:03.228 #define SPDK_CONFIG_VIRTIO 1 00:09:03.228 #undef SPDK_CONFIG_VTUNE 00:09:03.228 #define SPDK_CONFIG_VTUNE_DIR 00:09:03.228 #define SPDK_CONFIG_WERROR 1 00:09:03.228 #define SPDK_CONFIG_WPDK_DIR 00:09:03.228 #undef SPDK_CONFIG_XNVME 00:09:03.228 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:03.228 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:03.229 13:20:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.229 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
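The paired ": 0" / "export SPDK_TEST_*" entries traced above appear to come from the usual shell defaulting idiom: each test flag keeps whatever value autorun-spdk.conf injected and otherwise falls back to a default before being exported to the child test scripts. A minimal sketch of that pattern follows; the variable names are taken from this trace, while the fallback values shown are illustrative only, not the script's actual defaults.

    # Sketch of the flag-defaulting idiom suggested by the xtrace above.
    # With `set -x`, each line prints as ": <value>" followed by the export,
    # which is exactly the shape of the trace entries around @58-@172.
    : "${RUN_NIGHTLY:=0}";                 export RUN_NIGHTLY
    : "${SPDK_RUN_FUNCTIONAL_TEST:=0}";    export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_NVMF:=0}";              export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}";  export SPDK_TEST_NVMF_TRANSPORT   # default is illustrative
    : "${SPDK_TEST_NVMF_NICS:=}";          export SPDK_TEST_NVMF_NICS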
00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 174690 ]] 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 174690 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.NDnOcv 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.NDnOcv/tests/target /tmp/spdk.NDnOcv 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55284826112 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6709882880 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941716480 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996402176 00:09:03.230 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:09:03.231 13:20:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=954368 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:03.231 * Looking for test storage... 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55284826112 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8924475392 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:03.231 13:20:37 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
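The test-storage probing traced above reduces to: propose a few candidate directories, parse df output into per-mount free-space figures, and take the first candidate whose mount point can hold the ~2 GiB the test asks for. A simplified sketch of that selection, assuming GNU df and bash 4+; the real set_test_storage helper also handles tmpfs/ramfs mounts and resizing, which is omitted here.

    # Simplified sketch of the storage selection walked through in the trace.
    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target  # as in this run
    requested_size=$((2 * 1024 * 1024 * 1024))          # ~2 GiB, as in the trace
    storage_fallback=$(mktemp -udt spdk.XXXXXX)          # -u: path only, nothing created yet
    candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    mkdir -p "${candidates[@]}"

    # Collect free space per mount point (sizes in bytes via -B1, an assumption here).
    declare -A avails
    while read -r source fs size use avail _ mount; do
        avails["$mount"]=$avail
    done < <(df -T -B1 | grep -v Filesystem)

    # Pick the first candidate directory whose backing mount has enough room.
    for target_dir in "${candidates[@]}"; do
        mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
        if (( ${avails[$mount_point]:-0} >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done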
00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.231 13:20:37 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:03.231 13:20:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:03.232 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.232 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.232 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.232 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.232 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.232 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.232 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.232 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.232 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.232 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.232 13:20:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.232 13:20:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:05.763 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:05.763 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.763 13:20:39 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:05.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:05.763 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.763 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.764 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.764 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.764 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.764 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:05.764 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.764 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.764 13:20:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:09:05.764 00:09:05.764 --- 10.0.0.2 ping statistics --- 00:09:05.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.764 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:09:05.764 00:09:05.764 --- 10.0.0.1 ping statistics --- 00:09:05.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.764 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.764 ************************************ 00:09:05.764 START TEST nvmf_filesystem_no_in_capsule 00:09:05.764 ************************************ 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=176321 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 176321 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 176321 ']' 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.764 13:20:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.764 [2024-07-13 13:20:40.231896] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:05.764 [2024-07-13 13:20:40.232039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.764 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.764 [2024-07-13 13:20:40.367160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.022 [2024-07-13 13:20:40.628091] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.022 [2024-07-13 13:20:40.628164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.022 [2024-07-13 13:20:40.628192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.022 [2024-07-13 13:20:40.628214] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.022 [2024-07-13 13:20:40.628235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
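The network bring-up that nvmf/common.sh performs above, before any filesystem work starts, condenses to roughly the sketch below. It only restates commands already visible in the trace; the interface names (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addresses and the namespace name are simply what this run used on its e810 ports.

# Target-side port moves into its own network namespace so initiator and
# target traffic really cross the link between the two physical ports.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps cvl_0_1 (10.0.0.1); the namespaced target gets cvl_0_0 (10.0.0.2).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the default NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process is then launched inside that namespace, which is why the target start-up and every rpc_cmd in the rest of this log are prefixed with ip netns exec cvl_0_0_ns_spdk.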
00:09:06.022 [2024-07-13 13:20:40.628353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.022 [2024-07-13 13:20:40.628410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.022 [2024-07-13 13:20:40.628455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.022 [2024-07-13 13:20:40.628465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:06.620 [2024-07-13 13:20:41.185419] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.620 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.188 Malloc1 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.188 [2024-07-13 13:20:41.774570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:07.188 { 00:09:07.188 "name": "Malloc1", 00:09:07.188 "aliases": [ 00:09:07.188 "1f92c317-82ac-403f-aa7e-5bee9052ab00" 00:09:07.188 ], 00:09:07.188 "product_name": "Malloc disk", 00:09:07.188 "block_size": 512, 00:09:07.188 "num_blocks": 1048576, 00:09:07.188 "uuid": "1f92c317-82ac-403f-aa7e-5bee9052ab00", 00:09:07.188 "assigned_rate_limits": { 00:09:07.188 "rw_ios_per_sec": 0, 00:09:07.188 "rw_mbytes_per_sec": 0, 00:09:07.188 "r_mbytes_per_sec": 0, 00:09:07.188 "w_mbytes_per_sec": 0 00:09:07.188 }, 00:09:07.188 "claimed": true, 00:09:07.188 "claim_type": "exclusive_write", 00:09:07.188 "zoned": false, 00:09:07.188 "supported_io_types": { 00:09:07.188 "read": true, 00:09:07.188 "write": true, 00:09:07.188 "unmap": true, 00:09:07.188 "flush": true, 00:09:07.188 "reset": true, 00:09:07.188 "nvme_admin": false, 00:09:07.188 "nvme_io": false, 00:09:07.188 "nvme_io_md": false, 00:09:07.188 "write_zeroes": true, 00:09:07.188 "zcopy": true, 00:09:07.188 "get_zone_info": false, 00:09:07.188 "zone_management": false, 00:09:07.188 "zone_append": false, 00:09:07.188 "compare": false, 00:09:07.188 "compare_and_write": false, 00:09:07.188 "abort": true, 00:09:07.188 "seek_hole": false, 00:09:07.188 "seek_data": false, 00:09:07.188 "copy": true, 00:09:07.188 "nvme_iov_md": false 00:09:07.188 }, 00:09:07.188 "memory_domains": [ 00:09:07.188 { 
00:09:07.188 "dma_device_id": "system", 00:09:07.188 "dma_device_type": 1 00:09:07.188 }, 00:09:07.188 { 00:09:07.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.188 "dma_device_type": 2 00:09:07.188 } 00:09:07.188 ], 00:09:07.188 "driver_specific": {} 00:09:07.188 } 00:09:07.188 ]' 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:07.188 13:20:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.754 13:20:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:07.754 13:20:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:07.754 13:20:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.754 13:20:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:07.754 13:20:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:10.277 13:20:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:10.534 13:20:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:11.906 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:11.907 ************************************ 00:09:11.907 START TEST filesystem_ext4 00:09:11.907 ************************************ 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:11.907 13:20:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:11.907 mke2fs 1.46.5 (30-Dec-2021) 00:09:11.907 Discarding device blocks: 0/522240 done 00:09:11.907 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:11.907 Filesystem UUID: c25ee36c-4f64-41f8-8662-9dcf63aed766 00:09:11.907 Superblock backups stored on blocks: 00:09:11.907 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:11.907 00:09:11.907 Allocating group tables: 0/64 done 00:09:11.907 Writing inode tables: 0/64 done 00:09:11.907 Creating journal (8192 blocks): done 00:09:11.907 Writing superblocks and filesystem accounting information: 0/64 done 00:09:11.907 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:11.907 13:20:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 176321 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:12.842 00:09:12.842 real 0m1.123s 00:09:12.842 user 0m0.015s 00:09:12.842 sys 0m0.057s 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:12.842 ************************************ 00:09:12.842 END TEST filesystem_ext4 00:09:12.842 ************************************ 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:12.842 13:20:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:12.842 ************************************ 00:09:12.842 START TEST filesystem_btrfs 00:09:12.842 ************************************ 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:12.842 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:13.100 btrfs-progs v6.6.2 00:09:13.100 See https://btrfs.readthedocs.io for more information. 00:09:13.100 00:09:13.100 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:13.100 NOTE: several default settings have changed in version 5.15, please make sure 00:09:13.100 this does not affect your deployments: 00:09:13.100 - DUP for metadata (-m dup) 00:09:13.100 - enabled no-holes (-O no-holes) 00:09:13.100 - enabled free-space-tree (-R free-space-tree) 00:09:13.100 00:09:13.100 Label: (null) 00:09:13.100 UUID: 15c7a6ff-00f1-41a9-bf47-fed74c4ebcad 00:09:13.100 Node size: 16384 00:09:13.100 Sector size: 4096 00:09:13.100 Filesystem size: 510.00MiB 00:09:13.100 Block group profiles: 00:09:13.100 Data: single 8.00MiB 00:09:13.100 Metadata: DUP 32.00MiB 00:09:13.100 System: DUP 8.00MiB 00:09:13.100 SSD detected: yes 00:09:13.100 Zoned device: no 00:09:13.100 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:13.100 Runtime features: free-space-tree 00:09:13.100 Checksum: crc32c 00:09:13.100 Number of devices: 1 00:09:13.100 Devices: 00:09:13.100 ID SIZE PATH 00:09:13.100 1 510.00MiB /dev/nvme0n1p1 00:09:13.100 00:09:13.100 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:13.100 13:20:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 176321 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:14.034 00:09:14.034 real 0m1.153s 00:09:14.034 user 0m0.018s 00:09:14.034 sys 0m0.109s 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:14.034 ************************************ 00:09:14.034 END TEST filesystem_btrfs 00:09:14.034 ************************************ 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.034 ************************************ 00:09:14.034 START TEST filesystem_xfs 00:09:14.034 ************************************ 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:14.034 13:20:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:14.034 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:14.034 = sectsz=512 attr=2, projid32bit=1 00:09:14.034 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:14.034 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:14.034 data = bsize=4096 blocks=130560, imaxpct=25 00:09:14.034 = sunit=0 swidth=0 blks 00:09:14.034 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:14.034 log =internal log bsize=4096 blocks=16384, version=2 00:09:14.034 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:14.034 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:15.408 Discarding blocks...Done. 
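At this point mkfs.xfs has completed; what follows for xfs is the same verification cycle that already ran for ext4 and btrfs. Condensed from the trace, the per-filesystem check in target/filesystem.sh is roughly the sketch below (the script also keeps a retry counter around the umount, and uses the literal target pid, 176321 in this run, where $nvmfpid appears here).

# Per-filesystem check repeated above for ext4, btrfs and xfs (condensed).
mkfs.$fstype $force /dev/nvme0n1p1       # $force is -F for ext4, -f otherwise
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                    # the filesystem must accept a write
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                       # nvmf_tgt must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible to the host
lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible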
00:09:15.408 13:20:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:15.408 13:20:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 176321 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:17.308 00:09:17.308 real 0m2.964s 00:09:17.308 user 0m0.028s 00:09:17.308 sys 0m0.048s 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:17.308 ************************************ 00:09:17.308 END TEST filesystem_xfs 00:09:17.308 ************************************ 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.308 13:20:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 176321 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 176321 ']' 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 176321 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 176321 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 176321' 00:09:17.308 killing process with pid 176321 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 176321 00:09:17.308 13:20:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 176321 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:19.835 00:09:19.835 real 0m14.285s 00:09:19.835 user 0m52.641s 00:09:19.835 sys 0m1.963s 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.835 ************************************ 00:09:19.835 END TEST nvmf_filesystem_no_in_capsule 00:09:19.835 ************************************ 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:19.835 ************************************ 00:09:19.835 START TEST nvmf_filesystem_in_capsule 00:09:19.835 ************************************ 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.835 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=178156 00:09:19.836 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:19.836 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 178156 00:09:19.836 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 178156 ']' 00:09:19.836 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.836 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:19.836 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.836 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:19.836 13:20:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.836 [2024-07-13 13:20:54.579096] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:19.836 [2024-07-13 13:20:54.579259] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.094 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.094 [2024-07-13 13:20:54.737451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.352 [2024-07-13 13:20:55.002111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.352 [2024-07-13 13:20:55.002193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
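Everything from here repeats the same filesystem suite with in_capsule=4096 instead of 0. The only functional difference between the two passes is the in-capsule data size handed to the transport when it is created: with -c 4096 the initiator may carry up to 4 KiB of write data inside the NVMe/TCP command capsule itself rather than in a separate data transfer. The two RPCs as they appear in this log (rpc_cmd is the autotest helper that forwards the call to SPDK's rpc.py against the running target):

# First pass (nvmf_filesystem_no_in_capsule): no in-capsule data
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
# Second pass (nvmf_filesystem_in_capsule), the run starting here
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096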
00:09:20.352 [2024-07-13 13:20:55.002221] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.352 [2024-07-13 13:20:55.002241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.352 [2024-07-13 13:20:55.002263] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.352 [2024-07-13 13:20:55.002393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.352 [2024-07-13 13:20:55.002450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.352 [2024-07-13 13:20:55.002515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.352 [2024-07-13 13:20:55.002526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.919 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.920 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:20.920 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.920 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:20.920 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.920 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.920 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:20.920 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:20.920 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.920 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.920 [2024-07-13 13:20:55.580209] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.920 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.920 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:20.920 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.920 13:20:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.487 Malloc1 00:09:21.487 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.487 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:21.487 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.487 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.488 13:20:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.488 [2024-07-13 13:20:56.155443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:21.488 { 00:09:21.488 "name": "Malloc1", 00:09:21.488 "aliases": [ 00:09:21.488 "39aff479-2c6c-4bbc-b2fd-df4abdf2ccc9" 00:09:21.488 ], 00:09:21.488 "product_name": "Malloc disk", 00:09:21.488 "block_size": 512, 00:09:21.488 "num_blocks": 1048576, 00:09:21.488 "uuid": "39aff479-2c6c-4bbc-b2fd-df4abdf2ccc9", 00:09:21.488 "assigned_rate_limits": { 00:09:21.488 "rw_ios_per_sec": 0, 00:09:21.488 "rw_mbytes_per_sec": 0, 00:09:21.488 "r_mbytes_per_sec": 0, 00:09:21.488 "w_mbytes_per_sec": 0 00:09:21.488 }, 00:09:21.488 "claimed": true, 00:09:21.488 "claim_type": "exclusive_write", 00:09:21.488 "zoned": false, 00:09:21.488 "supported_io_types": { 00:09:21.488 "read": true, 00:09:21.488 "write": true, 00:09:21.488 "unmap": true, 00:09:21.488 "flush": true, 00:09:21.488 "reset": true, 00:09:21.488 "nvme_admin": false, 00:09:21.488 "nvme_io": false, 00:09:21.488 "nvme_io_md": false, 00:09:21.488 "write_zeroes": true, 00:09:21.488 "zcopy": true, 00:09:21.488 "get_zone_info": false, 00:09:21.488 "zone_management": false, 00:09:21.488 
"zone_append": false, 00:09:21.488 "compare": false, 00:09:21.488 "compare_and_write": false, 00:09:21.488 "abort": true, 00:09:21.488 "seek_hole": false, 00:09:21.488 "seek_data": false, 00:09:21.488 "copy": true, 00:09:21.488 "nvme_iov_md": false 00:09:21.488 }, 00:09:21.488 "memory_domains": [ 00:09:21.488 { 00:09:21.488 "dma_device_id": "system", 00:09:21.488 "dma_device_type": 1 00:09:21.488 }, 00:09:21.488 { 00:09:21.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.488 "dma_device_type": 2 00:09:21.488 } 00:09:21.488 ], 00:09:21.488 "driver_specific": {} 00:09:21.488 } 00:09:21.488 ]' 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:21.488 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:21.746 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:21.746 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:21.746 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:21.746 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:21.746 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:22.313 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:22.313 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:22.313 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:22.313 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:22.313 13:20:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:24.242 13:20:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:24.807 13:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:25.374 13:20:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:26.308 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:26.308 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:26.309 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:26.309 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.309 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:26.309 ************************************ 00:09:26.309 START TEST filesystem_in_capsule_ext4 00:09:26.309 ************************************ 00:09:26.309 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:26.309 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:26.309 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:26.309 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:26.309 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:26.309 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:26.309 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:26.309 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:26.309 13:21:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:26.309 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:26.309 13:21:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:26.309 mke2fs 1.46.5 (30-Dec-2021) 00:09:26.567 Discarding device blocks: 0/522240 done 00:09:26.567 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:26.567 Filesystem UUID: 7df63511-3c2e-4076-969b-2bfdb639f950 00:09:26.567 Superblock backups stored on blocks: 00:09:26.567 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:26.567 00:09:26.567 Allocating group tables: 0/64 done 00:09:26.567 Writing inode tables: 0/64 done 00:09:26.825 Creating journal (8192 blocks): done 00:09:27.082 Writing superblocks and filesystem accounting information: 0/64 done 00:09:27.082 00:09:27.082 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:27.082 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 178156 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:27.341 00:09:27.341 real 0m0.998s 00:09:27.341 user 0m0.015s 00:09:27.341 sys 0m0.052s 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:27.341 ************************************ 00:09:27.341 END TEST filesystem_in_capsule_ext4 00:09:27.341 ************************************ 00:09:27.341 
13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.341 13:21:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.341 ************************************ 00:09:27.341 START TEST filesystem_in_capsule_btrfs 00:09:27.341 ************************************ 00:09:27.341 13:21:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:27.341 13:21:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:27.341 13:21:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:27.341 13:21:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:27.341 13:21:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:27.341 13:21:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:27.341 13:21:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:27.341 13:21:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:27.341 13:21:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:27.341 13:21:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:27.341 13:21:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:27.906 btrfs-progs v6.6.2 00:09:27.906 See https://btrfs.readthedocs.io for more information. 00:09:27.906 00:09:27.906 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:27.906 NOTE: several default settings have changed in version 5.15, please make sure 00:09:27.906 this does not affect your deployments: 00:09:27.906 - DUP for metadata (-m dup) 00:09:27.906 - enabled no-holes (-O no-holes) 00:09:27.906 - enabled free-space-tree (-R free-space-tree) 00:09:27.906 00:09:27.906 Label: (null) 00:09:27.906 UUID: 5bb51b57-98ab-44e5-ab43-80c4c13f2daa 00:09:27.906 Node size: 16384 00:09:27.906 Sector size: 4096 00:09:27.906 Filesystem size: 510.00MiB 00:09:27.906 Block group profiles: 00:09:27.906 Data: single 8.00MiB 00:09:27.906 Metadata: DUP 32.00MiB 00:09:27.906 System: DUP 8.00MiB 00:09:27.906 SSD detected: yes 00:09:27.906 Zoned device: no 00:09:27.906 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:27.906 Runtime features: free-space-tree 00:09:27.906 Checksum: crc32c 00:09:27.906 Number of devices: 1 00:09:27.906 Devices: 00:09:27.906 ID SIZE PATH 00:09:27.906 1 510.00MiB /dev/nvme0n1p1 00:09:27.906 00:09:27.906 13:21:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:27.906 13:21:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 178156 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:28.838 00:09:28.838 real 0m1.329s 00:09:28.838 user 0m0.018s 00:09:28.838 sys 0m0.122s 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:28.838 ************************************ 00:09:28.838 END TEST filesystem_in_capsule_btrfs 00:09:28.838 ************************************ 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.838 ************************************ 00:09:28.838 START TEST filesystem_in_capsule_xfs 00:09:28.838 ************************************ 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:28.838 13:21:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:28.838 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:28.838 = sectsz=512 attr=2, projid32bit=1 00:09:28.838 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:28.838 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:28.838 data = bsize=4096 blocks=130560, imaxpct=25 00:09:28.838 = sunit=0 swidth=0 blks 00:09:28.838 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:28.838 log =internal log bsize=4096 blocks=16384, version=2 00:09:28.838 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:28.838 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:29.769 Discarding blocks...Done. 
00:09:29.769 13:21:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:29.769 13:21:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:31.662 13:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:31.662 13:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:31.662 13:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:31.662 13:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:31.662 13:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:31.662 13:21:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 178156 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:31.662 00:09:31.662 real 0m2.642s 00:09:31.662 user 0m0.009s 00:09:31.662 sys 0m0.063s 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:31.662 ************************************ 00:09:31.662 END TEST filesystem_in_capsule_xfs 00:09:31.662 ************************************ 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:31.662 13:21:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 178156 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 178156 ']' 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 178156 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 178156 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 178156' 00:09:31.662 killing process with pid 178156 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 178156 00:09:31.662 13:21:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 178156 00:09:34.186 13:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:34.186 00:09:34.186 real 0m14.436s 00:09:34.186 user 0m53.242s 00:09:34.186 sys 0m1.946s 00:09:34.186 13:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.186 13:21:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.186 ************************************ 00:09:34.186 END TEST nvmf_filesystem_in_capsule 00:09:34.186 ************************************ 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.446 rmmod nvme_tcp 00:09:34.446 rmmod nvme_fabrics 00:09:34.446 rmmod nvme_keyring 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.446 13:21:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.352 13:21:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:36.353 00:09:36.353 real 0m33.340s 00:09:36.353 user 1m46.862s 00:09:36.353 sys 0m5.545s 00:09:36.353 13:21:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.353 13:21:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:36.353 ************************************ 00:09:36.353 END TEST nvmf_filesystem 00:09:36.353 ************************************ 00:09:36.353 13:21:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:36.353 13:21:11 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:36.353 13:21:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:36.353 13:21:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.353 13:21:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.353 ************************************ 00:09:36.353 START TEST nvmf_target_discovery 00:09:36.353 ************************************ 00:09:36.353 13:21:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:36.610 * Looking for test storage... 
00:09:36.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.610 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:36.611 13:21:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.513 13:21:13 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:38.513 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:38.513 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:38.513 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:38.513 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.513 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:38.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:09:38.775 00:09:38.775 --- 10.0.0.2 ping statistics --- 00:09:38.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.775 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:09:38.775 00:09:38.775 --- 10.0.0.1 ping statistics --- 00:09:38.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.775 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.775 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=182647 00:09:38.776 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.776 13:21:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 182647 00:09:38.776 13:21:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 182647 ']' 00:09:38.776 13:21:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.776 13:21:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.776 13:21:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:38.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.776 13:21:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.776 13:21:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.776 [2024-07-13 13:21:13.474695] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:38.776 [2024-07-13 13:21:13.474874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.034 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.034 [2024-07-13 13:21:13.618026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.292 [2024-07-13 13:21:13.887951] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.292 [2024-07-13 13:21:13.888022] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.292 [2024-07-13 13:21:13.888051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.292 [2024-07-13 13:21:13.888073] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.292 [2024-07-13 13:21:13.888095] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.292 [2024-07-13 13:21:13.888217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.292 [2024-07-13 13:21:13.889909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.292 [2024-07-13 13:21:13.889948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.292 [2024-07-13 13:21:13.889956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 [2024-07-13 13:21:14.426097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 Null1 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 [2024-07-13 13:21:14.467678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 Null2 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:39.883 13:21:14 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 Null3 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.883 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.884 Null4 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.884 13:21:14 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.884 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:40.142 00:09:40.142 Discovery Log Number of Records 6, Generation counter 6 00:09:40.142 =====Discovery Log Entry 0====== 00:09:40.142 trtype: tcp 00:09:40.142 adrfam: ipv4 00:09:40.142 subtype: current discovery subsystem 00:09:40.142 treq: not required 00:09:40.142 portid: 0 00:09:40.142 trsvcid: 4420 00:09:40.142 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:40.142 traddr: 10.0.0.2 00:09:40.142 eflags: explicit discovery connections, duplicate discovery information 00:09:40.142 sectype: none 00:09:40.142 =====Discovery Log Entry 1====== 00:09:40.142 trtype: tcp 00:09:40.142 adrfam: ipv4 00:09:40.142 subtype: nvme subsystem 00:09:40.142 treq: not required 00:09:40.142 portid: 0 00:09:40.142 trsvcid: 4420 00:09:40.142 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:40.142 traddr: 10.0.0.2 00:09:40.142 eflags: none 00:09:40.142 sectype: none 00:09:40.142 =====Discovery Log Entry 2====== 00:09:40.142 trtype: tcp 00:09:40.142 adrfam: ipv4 00:09:40.142 subtype: nvme subsystem 00:09:40.142 treq: not required 00:09:40.142 portid: 0 00:09:40.142 trsvcid: 4420 00:09:40.142 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:40.142 traddr: 10.0.0.2 00:09:40.142 eflags: none 00:09:40.142 sectype: none 00:09:40.142 =====Discovery Log Entry 3====== 00:09:40.142 trtype: tcp 00:09:40.142 adrfam: ipv4 00:09:40.142 subtype: nvme subsystem 00:09:40.142 treq: not required 00:09:40.142 portid: 0 00:09:40.142 trsvcid: 4420 00:09:40.142 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:40.142 traddr: 10.0.0.2 00:09:40.142 eflags: none 00:09:40.142 sectype: none 00:09:40.142 =====Discovery Log Entry 4====== 00:09:40.142 trtype: tcp 00:09:40.142 adrfam: ipv4 00:09:40.142 subtype: nvme subsystem 00:09:40.142 treq: not required 
00:09:40.142 portid: 0 00:09:40.142 trsvcid: 4420 00:09:40.142 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:40.142 traddr: 10.0.0.2 00:09:40.142 eflags: none 00:09:40.142 sectype: none 00:09:40.142 =====Discovery Log Entry 5====== 00:09:40.142 trtype: tcp 00:09:40.142 adrfam: ipv4 00:09:40.142 subtype: discovery subsystem referral 00:09:40.142 treq: not required 00:09:40.142 portid: 0 00:09:40.142 trsvcid: 4430 00:09:40.142 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:40.142 traddr: 10.0.0.2 00:09:40.142 eflags: none 00:09:40.142 sectype: none 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:40.142 Perform nvmf subsystem discovery via RPC 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.142 [ 00:09:40.142 { 00:09:40.142 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:40.142 "subtype": "Discovery", 00:09:40.142 "listen_addresses": [ 00:09:40.142 { 00:09:40.142 "trtype": "TCP", 00:09:40.142 "adrfam": "IPv4", 00:09:40.142 "traddr": "10.0.0.2", 00:09:40.142 "trsvcid": "4420" 00:09:40.142 } 00:09:40.142 ], 00:09:40.142 "allow_any_host": true, 00:09:40.142 "hosts": [] 00:09:40.142 }, 00:09:40.142 { 00:09:40.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.142 "subtype": "NVMe", 00:09:40.142 "listen_addresses": [ 00:09:40.142 { 00:09:40.142 "trtype": "TCP", 00:09:40.142 "adrfam": "IPv4", 00:09:40.142 "traddr": "10.0.0.2", 00:09:40.142 "trsvcid": "4420" 00:09:40.142 } 00:09:40.142 ], 00:09:40.142 "allow_any_host": true, 00:09:40.142 "hosts": [], 00:09:40.142 "serial_number": "SPDK00000000000001", 00:09:40.142 "model_number": "SPDK bdev Controller", 00:09:40.142 "max_namespaces": 32, 00:09:40.142 "min_cntlid": 1, 00:09:40.142 "max_cntlid": 65519, 00:09:40.142 "namespaces": [ 00:09:40.142 { 00:09:40.142 "nsid": 1, 00:09:40.142 "bdev_name": "Null1", 00:09:40.142 "name": "Null1", 00:09:40.142 "nguid": "D2495582B302437A9EF307AD0554A288", 00:09:40.142 "uuid": "d2495582-b302-437a-9ef3-07ad0554a288" 00:09:40.142 } 00:09:40.142 ] 00:09:40.142 }, 00:09:40.142 { 00:09:40.142 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:40.142 "subtype": "NVMe", 00:09:40.142 "listen_addresses": [ 00:09:40.142 { 00:09:40.142 "trtype": "TCP", 00:09:40.142 "adrfam": "IPv4", 00:09:40.142 "traddr": "10.0.0.2", 00:09:40.142 "trsvcid": "4420" 00:09:40.142 } 00:09:40.142 ], 00:09:40.142 "allow_any_host": true, 00:09:40.142 "hosts": [], 00:09:40.142 "serial_number": "SPDK00000000000002", 00:09:40.142 "model_number": "SPDK bdev Controller", 00:09:40.142 "max_namespaces": 32, 00:09:40.142 "min_cntlid": 1, 00:09:40.142 "max_cntlid": 65519, 00:09:40.142 "namespaces": [ 00:09:40.142 { 00:09:40.142 "nsid": 1, 00:09:40.142 "bdev_name": "Null2", 00:09:40.142 "name": "Null2", 00:09:40.142 "nguid": "217EE6284E474499BF232564E01FC5EE", 00:09:40.142 "uuid": "217ee628-4e47-4499-bf23-2564e01fc5ee" 00:09:40.142 } 00:09:40.142 ] 00:09:40.142 }, 00:09:40.142 { 00:09:40.142 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:40.142 "subtype": "NVMe", 00:09:40.142 "listen_addresses": [ 00:09:40.142 { 00:09:40.142 "trtype": "TCP", 00:09:40.142 "adrfam": "IPv4", 00:09:40.142 "traddr": "10.0.0.2", 00:09:40.142 "trsvcid": "4420" 00:09:40.142 } 00:09:40.142 ], 00:09:40.142 "allow_any_host": true, 
00:09:40.142 "hosts": [], 00:09:40.142 "serial_number": "SPDK00000000000003", 00:09:40.142 "model_number": "SPDK bdev Controller", 00:09:40.142 "max_namespaces": 32, 00:09:40.142 "min_cntlid": 1, 00:09:40.142 "max_cntlid": 65519, 00:09:40.142 "namespaces": [ 00:09:40.142 { 00:09:40.142 "nsid": 1, 00:09:40.142 "bdev_name": "Null3", 00:09:40.142 "name": "Null3", 00:09:40.142 "nguid": "ED411A6C02774F71BB3B3395CD978259", 00:09:40.142 "uuid": "ed411a6c-0277-4f71-bb3b-3395cd978259" 00:09:40.142 } 00:09:40.142 ] 00:09:40.142 }, 00:09:40.142 { 00:09:40.142 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:40.142 "subtype": "NVMe", 00:09:40.142 "listen_addresses": [ 00:09:40.142 { 00:09:40.142 "trtype": "TCP", 00:09:40.142 "adrfam": "IPv4", 00:09:40.142 "traddr": "10.0.0.2", 00:09:40.142 "trsvcid": "4420" 00:09:40.142 } 00:09:40.142 ], 00:09:40.142 "allow_any_host": true, 00:09:40.142 "hosts": [], 00:09:40.142 "serial_number": "SPDK00000000000004", 00:09:40.142 "model_number": "SPDK bdev Controller", 00:09:40.142 "max_namespaces": 32, 00:09:40.142 "min_cntlid": 1, 00:09:40.142 "max_cntlid": 65519, 00:09:40.142 "namespaces": [ 00:09:40.142 { 00:09:40.142 "nsid": 1, 00:09:40.142 "bdev_name": "Null4", 00:09:40.142 "name": "Null4", 00:09:40.142 "nguid": "A2572E3310EA4101BA91F5BBC0D5210A", 00:09:40.142 "uuid": "a2572e33-10ea-4101-ba91-f5bbc0d5210a" 00:09:40.142 } 00:09:40.142 ] 00:09:40.142 } 00:09:40.142 ] 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:40.142 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.143 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:40.401 rmmod nvme_tcp 00:09:40.401 rmmod nvme_fabrics 00:09:40.401 rmmod nvme_keyring 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 182647 ']' 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 182647 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 182647 ']' 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 182647 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.401 13:21:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 182647 00:09:40.401 13:21:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:40.401 13:21:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:40.401 13:21:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 182647' 00:09:40.401 killing process with pid 182647 00:09:40.401 13:21:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 182647 00:09:40.401 13:21:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 182647 00:09:41.774 13:21:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:41.774 13:21:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:41.774 13:21:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:41.774 13:21:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:41.774 13:21:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:41.774 13:21:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.774 13:21:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.774 13:21:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.682 13:21:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:43.682 00:09:43.682 real 0m7.267s 00:09:43.682 user 0m9.087s 00:09:43.682 sys 0m2.067s 00:09:43.682 13:21:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.682 13:21:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:43.682 ************************************ 00:09:43.682 END TEST nvmf_target_discovery 00:09:43.682 ************************************ 00:09:43.682 13:21:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:09:43.682 13:21:18 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:43.682 13:21:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:43.682 13:21:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.682 13:21:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:43.682 ************************************ 00:09:43.682 START TEST nvmf_referrals 00:09:43.682 ************************************ 00:09:43.682 13:21:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:43.941 * Looking for test storage... 00:09:43.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.941 13:21:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
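The referral addresses defined here (127.0.0.2/3/4 on port 4430) are exercised further down via the discovery-referral RPCs. A minimal sketch of those calls issued directly with rpc.py — illustrative only, assuming the default RPC socket rather than the harness's rpc_cmd wrapper:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430   # advertise a second discovery service to initiators
"$rpc" nvmf_discovery_get_referrals                                  # confirm the referral is now listed
"$rpc" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430   # and withdraw it again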
00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:43.942 13:21:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.844 13:21:20 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:45.844 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:45.844 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:45.844 13:21:20 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:45.844 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:45.844 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.844 13:21:20 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.844 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:45.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:09:45.845 00:09:45.845 --- 10.0.0.2 ping statistics --- 00:09:45.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.845 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:09:45.845 00:09:45.845 --- 10.0.0.1 ping statistics --- 00:09:45.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.845 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=184892 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 184892 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 184892 ']' 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
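waitforlisten blocks until the freshly launched nvmf_tgt answers on its RPC socket before the test proceeds. A hand-rolled equivalent might simply poll a cheap RPC — a sketch under the assumption that the default /var/tmp/spdk.sock path applies, not the harness's own implementation:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # keep polling until the target accepts RPC connections
done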
00:09:45.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.845 13:21:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:46.103 [2024-07-13 13:21:20.636038] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:46.103 [2024-07-13 13:21:20.636202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.103 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.103 [2024-07-13 13:21:20.794115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.361 [2024-07-13 13:21:21.063191] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.361 [2024-07-13 13:21:21.063270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.361 [2024-07-13 13:21:21.063298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.361 [2024-07-13 13:21:21.063318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.361 [2024-07-13 13:21:21.063339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.361 [2024-07-13 13:21:21.063471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.361 [2024-07-13 13:21:21.063530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.361 [2024-07-13 13:21:21.063590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.361 [2024-07-13 13:21:21.063601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:46.928 [2024-07-13 13:21:21.612223] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:46.928 [2024-07-13 13:21:21.625674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:46.928 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.185 13:21:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.443 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:47.443 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:47.443 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:47.443 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:47.443 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:47.443 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:47.443 13:21:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:47.443 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:47.701 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:47.701 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:47.701 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:47.701 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:47.701 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:47.701 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:47.701 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:47.701 13:21:22 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:47.701 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:47.701 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:47.701 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:47.701 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:47.701 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:47.958 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:48.215 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:48.215 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:48.215 13:21:22 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:48.215 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:48.215 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:48.215 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.215 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:48.215 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:48.215 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:48.215 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:48.215 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:48.215 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.215 13:21:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:48.472 
13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:48.472 13:21:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:48.472 rmmod nvme_tcp 00:09:48.472 rmmod nvme_fabrics 00:09:48.729 rmmod nvme_keyring 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 184892 ']' 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 184892 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 184892 ']' 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 184892 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 184892 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 184892' 00:09:48.729 killing process with pid 184892 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 184892 00:09:48.729 13:21:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 184892 00:09:50.104 13:21:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:50.104 13:21:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:50.104 13:21:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:50.104 13:21:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.104 13:21:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.104 13:21:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.104 13:21:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.104 13:21:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.006 13:21:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:52.006 00:09:52.006 real 0m8.224s 00:09:52.006 user 0m13.792s 00:09:52.006 sys 0m2.321s 00:09:52.006 13:21:26 nvmf_tcp.nvmf_referrals 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.006 13:21:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:52.006 ************************************ 00:09:52.006 END TEST nvmf_referrals 00:09:52.006 ************************************ 00:09:52.006 13:21:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:52.006 13:21:26 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:52.006 13:21:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:52.006 13:21:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.006 13:21:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.006 ************************************ 00:09:52.006 START TEST nvmf_connect_disconnect 00:09:52.006 ************************************ 00:09:52.006 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:52.006 * Looking for test storage... 00:09:52.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.006 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.006 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:52.006 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.006 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.006 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.006 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.006 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.006 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.006 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.007 13:21:26 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.007 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.265 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:52.265 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:52.265 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:52.265 13:21:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:54.164 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:54.164 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.164 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:54.165 13:21:28 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:54.165 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:54.165 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.165 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:09:54.424 00:09:54.424 --- 10.0.0.2 ping statistics --- 00:09:54.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.424 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:09:54.424 00:09:54.424 --- 10.0.0.1 ping statistics --- 00:09:54.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.424 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=187439 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 187439 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 187439 ']' 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.424 13:21:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:54.424 [2024-07-13 13:21:29.044230] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
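[Sketch, not part of the captured output] A minimal summary of the loopback topology the harness builds above, assuming the interface names (cvl_0_0 / cvl_0_1), namespace (cvl_0_0_ns_spdk) and addresses (10.0.0.2 target side, 10.0.0.1 initiator side) from this run; the real commands come from nvmf/common.sh:nvmf_tcp_init and are interleaved with the trace:
  # target-side port moves into its own network namespace; initiator side stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1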
00:09:54.424 [2024-07-13 13:21:29.044374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.424 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.682 [2024-07-13 13:21:29.188723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.940 [2024-07-13 13:21:29.457268] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.940 [2024-07-13 13:21:29.457328] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.940 [2024-07-13 13:21:29.457366] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.940 [2024-07-13 13:21:29.457384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.940 [2024-07-13 13:21:29.457402] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.940 [2024-07-13 13:21:29.457531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.940 [2024-07-13 13:21:29.457580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.940 [2024-07-13 13:21:29.457639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.940 [2024-07-13 13:21:29.457650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.531 13:21:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:55.531 13:21:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:55.531 13:21:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:55.531 13:21:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:55.531 13:21:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.531 [2024-07-13 13:21:30.021331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:55.531 13:21:30 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.531 [2024-07-13 13:21:30.136537] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:55.531 13:21:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:58.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.801 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:45.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.218 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:49.218 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:49.219 rmmod nvme_tcp 00:13:49.219 rmmod nvme_fabrics 00:13:49.219 rmmod nvme_keyring 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 187439 ']' 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 187439 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 187439 
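[Sketch, not part of the captured output] Each "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line above is one of the 100 connect/disconnect iterations. A hedged approximation of the flow visible in the trace, not the literal connect_disconnect.sh source (rpc_cmd in the trace drives the SPDK RPC socket; NVME_CONNECT was set to 'nvme connect -i 8' earlier; the real loop also waits for the namespace block device to appear and disappear):
  # target setup inside the cvl_0_0_ns_spdk namespace
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                        # -> Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # one iteration on the initiator side (repeated 100 times)
  nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # prints the 'disconnected 1 controller(s)' line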
']' 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 187439 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 187439 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 187439' 00:13:49.219 killing process with pid 187439 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 187439 00:13:49.219 13:25:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 187439 00:13:51.119 13:25:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:51.119 13:25:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:51.119 13:25:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:51.119 13:25:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.119 13:25:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:51.119 13:25:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.119 13:25:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.119 13:25:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.022 13:25:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:53.022 00:13:53.022 real 4m0.724s 00:13:53.022 user 15m10.319s 00:13:53.022 sys 0m37.278s 00:13:53.022 13:25:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.022 13:25:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:53.022 ************************************ 00:13:53.022 END TEST nvmf_connect_disconnect 00:13:53.022 ************************************ 00:13:53.022 13:25:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:53.022 13:25:27 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:53.022 13:25:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:53.022 13:25:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.022 13:25:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:53.022 ************************************ 00:13:53.022 START TEST nvmf_multitarget 00:13:53.022 ************************************ 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:53.022 * Looking for test storage... 
00:13:53.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:53.022 13:25:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:54.923 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:54.924 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:54.924 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:54.924 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:54.924 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:54.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:54.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:13:54.924 00:13:54.924 --- 10.0.0.2 ping statistics --- 00:13:54.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.924 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:13:54.924 00:13:54.924 --- 10.0.0.1 ping statistics --- 00:13:54.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.924 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=218909 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 218909 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 218909 ']' 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:54.924 13:25:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:54.924 [2024-07-13 13:25:29.598933] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:13:54.924 [2024-07-13 13:25:29.599098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.182 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.182 [2024-07-13 13:25:29.733227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:55.441 [2024-07-13 13:25:29.996264] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.441 [2024-07-13 13:25:29.996349] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.441 [2024-07-13 13:25:29.996378] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.441 [2024-07-13 13:25:29.996400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.441 [2024-07-13 13:25:29.996422] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.441 [2024-07-13 13:25:29.996548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.441 [2024-07-13 13:25:29.996605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.441 [2024-07-13 13:25:29.996651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.441 [2024-07-13 13:25:29.996662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.005 13:25:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.005 13:25:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:13:56.005 13:25:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.005 13:25:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:56.005 13:25:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:56.005 13:25:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.005 13:25:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:56.005 13:25:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:56.005 13:25:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:56.005 13:25:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:56.005 13:25:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:56.261 "nvmf_tgt_1" 00:13:56.261 13:25:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:56.261 "nvmf_tgt_2" 00:13:56.261 13:25:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:56.261 13:25:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:56.518 13:25:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:13:56.518 13:25:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:56.518 true 00:13:56.518 13:25:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:56.518 true 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:56.776 rmmod nvme_tcp 00:13:56.776 rmmod nvme_fabrics 00:13:56.776 rmmod nvme_keyring 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 218909 ']' 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 218909 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 218909 ']' 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 218909 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 218909 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 218909' 00:13:56.776 killing process with pid 218909 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 218909 00:13:56.776 13:25:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 218909 00:13:58.150 13:25:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:58.150 13:25:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:58.150 13:25:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:58.150 13:25:32 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.150 13:25:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.150 13:25:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.150 13:25:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.150 13:25:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.056 13:25:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:00.056 00:14:00.056 real 0m7.335s 00:14:00.056 user 0m11.408s 00:14:00.056 sys 0m2.043s 00:14:00.056 13:25:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.056 13:25:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:00.056 ************************************ 00:14:00.056 END TEST nvmf_multitarget 00:14:00.056 ************************************ 00:14:00.314 13:25:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:00.314 13:25:34 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:00.314 13:25:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:00.314 13:25:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.314 13:25:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:00.314 ************************************ 00:14:00.314 START TEST nvmf_rpc 00:14:00.314 ************************************ 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:00.314 * Looking for test storage... 
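
The multitarget run that just finished exercised one thing: creating and deleting named targets next to the default one through test/nvmf/target/multitarget_rpc.py, asserting on the target count with jq after every step. Reduced to its RPC calls, the sequence traced above is:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$("$rpc" nvmf_get_targets | jq length)" -eq 1 ]    # only the default target at start

    "$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32         # two extra named targets
    "$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$("$rpc" nvmf_get_targets | jq length)" -eq 3 ]

    "$rpc" nvmf_delete_target -n nvmf_tgt_1               # tear them down again
    "$rpc" nvmf_delete_target -n nvmf_tgt_2
    [ "$("$rpc" nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default target
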
00:14:00.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.314 13:25:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
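
The host identity set up a few lines earlier while sourcing nvmf/common.sh (nvme gen-hostnqn feeding NVME_HOSTNQN, with the embedded UUID reused as NVME_HOSTID) is what every later nvme connect in rpc.sh passes as --hostnqn/--hostid. Roughly, with the derivation of the host ID assumed (the trace only shows the resulting values):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: strip down to the UUID, here 5b23e107-7094-e311-b1cb-001e67a97d55
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
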
00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:02.847 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:02.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.847 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.848 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.848 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.848 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.848 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.848 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.848 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.848 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.848 13:25:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:02.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:02.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:14:02.848 00:14:02.848 --- 10.0.0.2 ping statistics --- 00:14:02.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.848 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
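
The interface discovery a few steps back (Found 0000:0a:00.0 / 0000:0a:00.1 -> cvl_0_0 / cvl_0_1) comes from gather_supported_nvmf_pci_devs matching the E810 device IDs registered in nvmf/common.sh and globbing each function's net directory in sysfs. Stripped of the RDMA and x722/mlx branches, and with the link-state test assumed to read operstate (the trace only shows it reducing to [[ up == up ]]), it amounts to:

    net_devs=()
    for pci in 0000:0a:00.0 0000:0a:00.1; do               # the two E810 functions (0x8086:0x159b) found above
        for dev in /sys/bus/pci/devices/$pci/net/*; do     # kernel net device(s) behind each PCI function
            [[ $(cat "$dev/operstate" 2>/dev/null) == up ]] && net_devs+=("${dev##*/}")
        done
    done
    # Result in this run: net_devs=(cvl_0_0 cvl_0_1); the first becomes NVMF_TARGET_INTERFACE,
    # the second NVMF_INITIATOR_INTERFACE.
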
00:14:02.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:14:02.848 00:14:02.848 --- 10.0.0.1 ping statistics --- 00:14:02.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.848 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=221239 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 221239 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 221239 ']' 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.848 13:25:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.848 [2024-07-13 13:25:37.251251] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:02.848 [2024-07-13 13:25:37.251416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.848 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.848 [2024-07-13 13:25:37.408695] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.107 [2024-07-13 13:25:37.676552] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.107 [2024-07-13 13:25:37.676633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:03.107 [2024-07-13 13:25:37.676661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.107 [2024-07-13 13:25:37.676682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.107 [2024-07-13 13:25:37.676704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.107 [2024-07-13 13:25:37.676854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.107 [2024-07-13 13:25:37.676915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.107 [2024-07-13 13:25:37.676950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.107 [2024-07-13 13:25:37.676961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:03.672 "tick_rate": 2700000000, 00:14:03.672 "poll_groups": [ 00:14:03.672 { 00:14:03.672 "name": "nvmf_tgt_poll_group_000", 00:14:03.672 "admin_qpairs": 0, 00:14:03.672 "io_qpairs": 0, 00:14:03.672 "current_admin_qpairs": 0, 00:14:03.672 "current_io_qpairs": 0, 00:14:03.672 "pending_bdev_io": 0, 00:14:03.672 "completed_nvme_io": 0, 00:14:03.672 "transports": [] 00:14:03.672 }, 00:14:03.672 { 00:14:03.672 "name": "nvmf_tgt_poll_group_001", 00:14:03.672 "admin_qpairs": 0, 00:14:03.672 "io_qpairs": 0, 00:14:03.672 "current_admin_qpairs": 0, 00:14:03.672 "current_io_qpairs": 0, 00:14:03.672 "pending_bdev_io": 0, 00:14:03.672 "completed_nvme_io": 0, 00:14:03.672 "transports": [] 00:14:03.672 }, 00:14:03.672 { 00:14:03.672 "name": "nvmf_tgt_poll_group_002", 00:14:03.672 "admin_qpairs": 0, 00:14:03.672 "io_qpairs": 0, 00:14:03.672 "current_admin_qpairs": 0, 00:14:03.672 "current_io_qpairs": 0, 00:14:03.672 "pending_bdev_io": 0, 00:14:03.672 "completed_nvme_io": 0, 00:14:03.672 "transports": [] 00:14:03.672 }, 00:14:03.672 { 00:14:03.672 "name": "nvmf_tgt_poll_group_003", 00:14:03.672 "admin_qpairs": 0, 00:14:03.672 "io_qpairs": 0, 00:14:03.672 "current_admin_qpairs": 0, 00:14:03.672 "current_io_qpairs": 0, 00:14:03.672 "pending_bdev_io": 0, 00:14:03.672 "completed_nvme_io": 0, 00:14:03.672 "transports": [] 00:14:03.672 } 00:14:03.672 ] 00:14:03.672 }' 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.672 [2024-07-13 13:25:38.334375] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:03.672 "tick_rate": 2700000000, 00:14:03.672 "poll_groups": [ 00:14:03.672 { 00:14:03.672 "name": "nvmf_tgt_poll_group_000", 00:14:03.672 "admin_qpairs": 0, 00:14:03.672 "io_qpairs": 0, 00:14:03.672 "current_admin_qpairs": 0, 00:14:03.672 "current_io_qpairs": 0, 00:14:03.672 "pending_bdev_io": 0, 00:14:03.672 "completed_nvme_io": 0, 00:14:03.672 "transports": [ 00:14:03.672 { 00:14:03.672 "trtype": "TCP" 00:14:03.672 } 00:14:03.672 ] 00:14:03.672 }, 00:14:03.672 { 00:14:03.672 "name": "nvmf_tgt_poll_group_001", 00:14:03.672 "admin_qpairs": 0, 00:14:03.672 "io_qpairs": 0, 00:14:03.672 "current_admin_qpairs": 0, 00:14:03.672 "current_io_qpairs": 0, 00:14:03.672 "pending_bdev_io": 0, 00:14:03.672 "completed_nvme_io": 0, 00:14:03.672 "transports": [ 00:14:03.672 { 00:14:03.672 "trtype": "TCP" 00:14:03.672 } 00:14:03.672 ] 00:14:03.672 }, 00:14:03.672 { 00:14:03.672 "name": "nvmf_tgt_poll_group_002", 00:14:03.672 "admin_qpairs": 0, 00:14:03.672 "io_qpairs": 0, 00:14:03.672 "current_admin_qpairs": 0, 00:14:03.672 "current_io_qpairs": 0, 00:14:03.672 "pending_bdev_io": 0, 00:14:03.672 "completed_nvme_io": 0, 00:14:03.672 "transports": [ 00:14:03.672 { 00:14:03.672 "trtype": "TCP" 00:14:03.672 } 00:14:03.672 ] 00:14:03.672 }, 00:14:03.672 { 00:14:03.672 "name": "nvmf_tgt_poll_group_003", 00:14:03.672 "admin_qpairs": 0, 00:14:03.672 "io_qpairs": 0, 00:14:03.672 "current_admin_qpairs": 0, 00:14:03.672 "current_io_qpairs": 0, 00:14:03.672 "pending_bdev_io": 0, 00:14:03.672 "completed_nvme_io": 0, 00:14:03.672 "transports": [ 00:14:03.672 { 00:14:03.672 "trtype": "TCP" 00:14:03.672 } 00:14:03.672 ] 00:14:03.672 } 00:14:03.672 ] 00:14:03.672 }' 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
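
The assertions running at this point lean on two small helpers from target/rpc.sh, jcount and jsum, which turn the captured nvmf_get_stats JSON into plain numbers. How they consume the stats variable is an assumption here; the jq, wc and awk stages are taken from the trace:

    stats=$(rpc_cmd nvmf_get_stats)

    jcount() {                                   # how many values does a jq filter yield?
        local filter=$1
        jq "$filter" <<< "$stats" | wc -l
    }

    jsum() {                                     # sum a numeric field across the filtered array
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }

    (( $(jcount '.poll_groups[].name') == 4 ))           # one poll group per core in -m 0xF
    (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))     # no queue pairs before any host connects
    (( $(jsum '.poll_groups[].io_qpairs') == 0 ))
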
00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:03.672 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.930 Malloc1 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.930 [2024-07-13 13:25:38.538025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:14:03.930 [2024-07-13 13:25:38.561182] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:14:03.930 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:03.930 could not add new controller: failed to write to nvme-fabrics device 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.930 13:25:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.920 13:25:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:04.920 13:25:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:04.920 13:25:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.920 13:25:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:04.920 13:25:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.816 13:25:41 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:06.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.816 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:06.817 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.817 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:14:06.817 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.817 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:06.817 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.817 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:06.817 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:06.817 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:06.817 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:06.817 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.817 [2024-07-13 13:25:41.546286] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:14:07.073 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:07.073 could not add new controller: failed to write to nvme-fabrics device 00:14:07.073 13:25:41 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:14:07.073 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:07.073 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:07.073 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:07.073 13:25:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:07.073 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.073 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.073 13:25:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.073 13:25:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:07.635 13:25:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:07.635 13:25:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:07.635 13:25:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:07.635 13:25:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:07.635 13:25:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:09.526 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:09.526 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:09.526 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:09.783 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:09.783 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:09.783 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:09.783 13:25:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:09.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.783 13:25:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:09.783 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:09.783 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:09.783 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:10.042 13:25:44 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.042 [2024-07-13 13:25:44.563888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.042 13:25:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.607 13:25:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:10.607 13:25:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:10.607 13:25:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.607 13:25:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:10.607 13:25:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:12.501 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:12.501 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:12.501 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.501 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:12.501 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.501 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:12.501 13:25:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:12.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.761 [2024-07-13 13:25:47.406394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.761 13:25:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:13.327 13:25:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:13.327 13:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:14:13.327 13:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:13.327 13:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:13.327 13:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.856 [2024-07-13 13:25:50.298363] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.856 13:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:16.422 13:25:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.422 13:25:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:16.422 13:25:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.422 13:25:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:16.422 13:25:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:18.335 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:18.335 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:18.335 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.335 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:18.335 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.335 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:18.335 13:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.597 [2024-07-13 13:25:53.217257] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.597 13:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:19.161 13:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:19.161 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:19.161 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.161 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:19.161 13:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:21.684 13:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:21.684 13:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:21.684 13:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.684 13:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:21.684 13:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.684 
13:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:21.684 13:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.684 [2024-07-13 13:25:56.110140] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.684 13:25:56 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.684 13:25:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:22.248 13:25:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:22.248 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:22.248 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:22.248 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:22.248 13:25:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:24.199 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:24.199 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:24.199 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:24.199 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:24.199 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:24.199 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:24.199 13:25:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:24.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:24.457 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:58 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.458 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 [2024-07-13 13:25:58.993149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.458 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:24.458 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 [2024-07-13 13:25:59.041227] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 [2024-07-13 13:25:59.089408] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 [2024-07-13 13:25:59.137563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 [2024-07-13 13:25:59.185749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.458 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.716 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.716 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.716 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.716 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.716 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.716 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.716 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.716 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.716 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.716 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:24.717 "tick_rate": 2700000000, 00:14:24.717 "poll_groups": [ 00:14:24.717 { 00:14:24.717 "name": "nvmf_tgt_poll_group_000", 00:14:24.717 "admin_qpairs": 2, 00:14:24.717 "io_qpairs": 84, 00:14:24.717 "current_admin_qpairs": 0, 00:14:24.717 "current_io_qpairs": 0, 00:14:24.717 "pending_bdev_io": 0, 00:14:24.717 "completed_nvme_io": 85, 00:14:24.717 "transports": [ 00:14:24.717 { 00:14:24.717 "trtype": "TCP" 00:14:24.717 } 00:14:24.717 ] 00:14:24.717 }, 00:14:24.717 { 00:14:24.717 "name": "nvmf_tgt_poll_group_001", 00:14:24.717 "admin_qpairs": 2, 00:14:24.717 "io_qpairs": 84, 00:14:24.717 "current_admin_qpairs": 0, 00:14:24.717 "current_io_qpairs": 0, 00:14:24.717 "pending_bdev_io": 0, 00:14:24.717 "completed_nvme_io": 171, 00:14:24.717 "transports": [ 00:14:24.717 { 00:14:24.717 "trtype": "TCP" 00:14:24.717 } 00:14:24.717 ] 00:14:24.717 }, 00:14:24.717 { 00:14:24.717 
"name": "nvmf_tgt_poll_group_002", 00:14:24.717 "admin_qpairs": 1, 00:14:24.717 "io_qpairs": 84, 00:14:24.717 "current_admin_qpairs": 0, 00:14:24.717 "current_io_qpairs": 0, 00:14:24.717 "pending_bdev_io": 0, 00:14:24.717 "completed_nvme_io": 149, 00:14:24.717 "transports": [ 00:14:24.717 { 00:14:24.717 "trtype": "TCP" 00:14:24.717 } 00:14:24.717 ] 00:14:24.717 }, 00:14:24.717 { 00:14:24.717 "name": "nvmf_tgt_poll_group_003", 00:14:24.717 "admin_qpairs": 2, 00:14:24.717 "io_qpairs": 84, 00:14:24.717 "current_admin_qpairs": 0, 00:14:24.717 "current_io_qpairs": 0, 00:14:24.717 "pending_bdev_io": 0, 00:14:24.717 "completed_nvme_io": 281, 00:14:24.717 "transports": [ 00:14:24.717 { 00:14:24.717 "trtype": "TCP" 00:14:24.717 } 00:14:24.717 ] 00:14:24.717 } 00:14:24.717 ] 00:14:24.717 }' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:24.717 rmmod nvme_tcp 00:14:24.717 rmmod nvme_fabrics 00:14:24.717 rmmod nvme_keyring 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 221239 ']' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 221239 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 221239 ']' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 221239 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 221239 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 221239' 00:14:24.717 killing process with pid 221239 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 221239 00:14:24.717 13:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 221239 00:14:26.090 13:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.090 13:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:26.090 13:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:26.090 13:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.090 13:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.091 13:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.091 13:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.091 13:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.624 13:26:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:28.624 00:14:28.624 real 0m27.990s 00:14:28.624 user 1m30.239s 00:14:28.624 sys 0m4.384s 00:14:28.624 13:26:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:28.624 13:26:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.624 ************************************ 00:14:28.624 END TEST nvmf_rpc 00:14:28.624 ************************************ 00:14:28.624 13:26:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:28.624 13:26:02 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:28.624 13:26:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:28.624 13:26:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.624 13:26:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:28.624 ************************************ 00:14:28.624 START TEST nvmf_invalid 00:14:28.624 ************************************ 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:28.624 * Looking for test storage... 
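For reference, each of the five iterations in the nvmf_rpc loop that finishes above exercises the same subsystem lifecycle; rpc_cmd in this harness forwards its arguments to scripts/rpc.py, so one iteration reduces to roughly the sketch below (the /var/tmp/spdk.sock socket path is an assumption; the NQN, serial, listener address and Malloc1 bdev are the ones shown in the log):

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'   # socket path assumed
  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_create_subsystem $NQN -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns $NQN Malloc1 -n 5
  $RPC nvmf_subsystem_allow_any_host $NQN
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
               --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n $NQN -a 10.0.0.2 -s 4420
  # waitforserial: poll (the harness tries up to ~15 times, 2 s apart) until the namespace shows up by serial
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
  nvme disconnect -n $NQN
  $RPC nvmf_subsystem_remove_ns $NQN 5
  $RPC nvmf_delete_subsystem $NQN

The closing check sums qpair counts across poll groups from nvmf_get_stats with the jsum helper (jq '.poll_groups[].admin_qpairs' piped into awk '{s+=$1}END{print s}') and only requires the totals to be greater than zero; in this run they come to 7 admin and 336 I/O qpairs.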
00:14:28.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:28.624 13:26:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.521 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:30.522 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:30.522 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:30.522 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:30.522 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.522 13:26:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:30.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:30.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:14:30.522 00:14:30.522 --- 10.0.0.2 ping statistics --- 00:14:30.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.522 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:30.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:14:30.522 00:14:30.522 --- 10.0.0.1 ping statistics --- 00:14:30.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.522 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=226115 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 226115 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 226115 ']' 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.522 13:26:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:30.522 [2024-07-13 13:26:05.193110] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
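nvmftestinit above is what builds the two-port loopback topology that the rest of invalid.sh depends on: the first detected e810 port (cvl_0_0) is moved into a network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, connectivity is checked with one ping in each direction, and nvmf_tgt is then started inside that namespace. Condensed from the commands in the log (interface names and addresses are the ones detected on this host; the relative nvmf_tgt path stands in for the full workspace path used above):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &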
00:14:30.522 [2024-07-13 13:26:05.193242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.780 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.780 [2024-07-13 13:26:05.323418] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.038 [2024-07-13 13:26:05.580301] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.038 [2024-07-13 13:26:05.580383] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.038 [2024-07-13 13:26:05.580411] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.038 [2024-07-13 13:26:05.580432] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.038 [2024-07-13 13:26:05.580454] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.038 [2024-07-13 13:26:05.580591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.038 [2024-07-13 13:26:05.580652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.038 [2024-07-13 13:26:05.580699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.038 [2024-07-13 13:26:05.580710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.604 13:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.604 13:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:14:31.604 13:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.604 13:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.604 13:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:31.604 13:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.604 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:31.604 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7713 00:14:31.862 [2024-07-13 13:26:06.440048] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:31.862 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:31.862 { 00:14:31.862 "nqn": "nqn.2016-06.io.spdk:cnode7713", 00:14:31.862 "tgt_name": "foobar", 00:14:31.862 "method": "nvmf_create_subsystem", 00:14:31.862 "req_id": 1 00:14:31.862 } 00:14:31.862 Got JSON-RPC error response 00:14:31.862 response: 00:14:31.862 { 00:14:31.862 "code": -32603, 00:14:31.862 "message": "Unable to find target foobar" 00:14:31.862 }' 00:14:31.862 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:31.862 { 00:14:31.862 "nqn": "nqn.2016-06.io.spdk:cnode7713", 00:14:31.862 "tgt_name": "foobar", 00:14:31.862 "method": "nvmf_create_subsystem", 00:14:31.862 "req_id": 1 00:14:31.862 } 00:14:31.862 Got JSON-RPC error response 00:14:31.862 response: 00:14:31.862 { 00:14:31.862 "code": -32603, 00:14:31.862 "message": "Unable to find target foobar" 00:14:31.862 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:31.862 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:31.862 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8294 00:14:32.120 [2024-07-13 13:26:06.680970] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8294: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:32.120 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:32.120 { 00:14:32.120 "nqn": "nqn.2016-06.io.spdk:cnode8294", 00:14:32.120 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:32.120 "method": "nvmf_create_subsystem", 00:14:32.120 "req_id": 1 00:14:32.120 } 00:14:32.120 Got JSON-RPC error response 00:14:32.120 response: 00:14:32.120 { 00:14:32.120 "code": -32602, 00:14:32.120 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:32.120 }' 00:14:32.120 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:32.120 { 00:14:32.120 "nqn": "nqn.2016-06.io.spdk:cnode8294", 00:14:32.120 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:32.120 "method": "nvmf_create_subsystem", 00:14:32.120 "req_id": 1 00:14:32.120 } 00:14:32.120 Got JSON-RPC error response 00:14:32.120 response: 00:14:32.120 { 00:14:32.120 "code": -32602, 00:14:32.120 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:32.120 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:32.120 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:32.120 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7978 00:14:32.378 [2024-07-13 13:26:06.933771] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7978: invalid model number 'SPDK_Controller' 00:14:32.378 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:32.378 { 00:14:32.378 "nqn": "nqn.2016-06.io.spdk:cnode7978", 00:14:32.378 "model_number": "SPDK_Controller\u001f", 00:14:32.378 "method": "nvmf_create_subsystem", 00:14:32.378 "req_id": 1 00:14:32.378 } 00:14:32.378 Got JSON-RPC error response 00:14:32.378 response: 00:14:32.378 { 00:14:32.378 "code": -32602, 00:14:32.378 "message": "Invalid MN SPDK_Controller\u001f" 00:14:32.378 }' 00:14:32.378 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:32.378 { 00:14:32.378 "nqn": "nqn.2016-06.io.spdk:cnode7978", 00:14:32.378 "model_number": "SPDK_Controller\u001f", 00:14:32.378 "method": "nvmf_create_subsystem", 00:14:32.378 "req_id": 1 00:14:32.378 } 00:14:32.378 Got JSON-RPC error response 00:14:32.378 response: 00:14:32.378 { 00:14:32.378 "code": -32602, 00:14:32.378 "message": "Invalid MN SPDK_Controller\u001f" 00:14:32.378 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:32.378 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:32.378 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:32.378 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' 
'88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:32.378 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:32.378 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:32.378 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:32.378 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.378 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:32.378 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:32.378 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:32.378 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ , == \- ]] 00:14:32.379 13:26:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ',WG4EMgu]FR; /dev/null' 00:14:36.591 13:26:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.495 13:26:13 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:38.495 00:14:38.495 real 0m10.265s 00:14:38.495 user 0m24.768s 00:14:38.495 sys 0m2.624s 00:14:38.495 13:26:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:38.495 13:26:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:38.495 ************************************ 00:14:38.495 END TEST nvmf_invalid 00:14:38.495 ************************************ 00:14:38.495 13:26:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:38.495 13:26:13 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:38.495 13:26:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:38.495 13:26:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.495 13:26:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:38.495 ************************************ 00:14:38.495 START TEST nvmf_abort 00:14:38.495 ************************************ 00:14:38.495 13:26:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:38.495 * Looking for test storage... 00:14:38.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- 
target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:14:38.754 13:26:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.654 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:40.655 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:40.655 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:40.655 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.655 13:26:15 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:40.655 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:40.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:40.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:14:40.655 00:14:40.655 --- 10.0.0.2 ping statistics --- 00:14:40.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.655 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:40.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:40.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:14:40.655 00:14:40.655 --- 10.0.0.1 ping statistics --- 00:14:40.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.655 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=228885 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 228885 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 228885 ']' 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.655 13:26:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:40.914 [2024-07-13 13:26:15.410058] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:40.914 [2024-07-13 13:26:15.410190] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.914 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.914 [2024-07-13 13:26:15.543021] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:41.202 [2024-07-13 13:26:15.774430] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.202 [2024-07-13 13:26:15.774512] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.202 [2024-07-13 13:26:15.774546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.202 [2024-07-13 13:26:15.774568] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.202 [2024-07-13 13:26:15.774590] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.202 [2024-07-13 13:26:15.774721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.202 [2024-07-13 13:26:15.774771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.202 [2024-07-13 13:26:15.774782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:41.769 [2024-07-13 13:26:16.356222] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:41.769 Malloc0 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:41.769 Delay0 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:41.769 [2024-07-13 13:26:16.472089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.769 13:26:16 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:42.027 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.027 [2024-07-13 13:26:16.680101] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:44.558 Initializing NVMe Controllers 00:14:44.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:44.558 controller IO queue size 128 less than required 00:14:44.558 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:44.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:44.558 Initialization complete. Launching workers. 
00:14:44.558 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 25259 00:14:44.558 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 25316, failed to submit 66 00:14:44.558 success 25259, unsuccess 57, failed 0 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:44.558 rmmod nvme_tcp 00:14:44.558 rmmod nvme_fabrics 00:14:44.558 rmmod nvme_keyring 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 228885 ']' 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 228885 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 228885 ']' 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 228885 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:14:44.558 13:26:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.558 13:26:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 228885 00:14:44.558 13:26:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:44.558 13:26:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:44.558 13:26:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 228885' 00:14:44.558 killing process with pid 228885 00:14:44.558 13:26:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 228885 00:14:44.558 13:26:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 228885 00:14:45.933 13:26:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:45.933 13:26:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:45.933 13:26:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:45.933 13:26:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.933 13:26:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.933 13:26:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.933 13:26:20 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.933 13:26:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.837 13:26:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:47.837 00:14:47.837 real 0m9.284s 00:14:47.837 user 0m15.547s 00:14:47.837 sys 0m2.774s 00:14:47.837 13:26:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:47.837 13:26:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:47.837 ************************************ 00:14:47.837 END TEST nvmf_abort 00:14:47.837 ************************************ 00:14:47.837 13:26:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:47.837 13:26:22 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:47.837 13:26:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:47.837 13:26:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.837 13:26:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:47.837 ************************************ 00:14:47.837 START TEST nvmf_ns_hotplug_stress 00:14:47.837 ************************************ 00:14:47.837 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:47.837 * Looking for test storage... 00:14:48.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.095 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.095 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:48.095 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.095 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.095 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.095 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.095 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.096 13:26:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:48.096 13:26:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:48.096 13:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:49.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:49.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.998 13:26:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:49.998 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:49.998 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.998 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.999 13:26:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.999 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:50.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:14:50.256 00:14:50.256 --- 10.0.0.2 ping statistics --- 00:14:50.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.256 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:50.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:14:50.256 00:14:50.256 --- 10.0.0.1 ping statistics --- 00:14:50.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.256 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.256 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=231371 00:14:50.257 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:50.257 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 231371 00:14:50.257 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 231371 ']' 00:14:50.257 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.257 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.257 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.257 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.257 13:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.257 [2024-07-13 13:26:24.869442] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:50.257 [2024-07-13 13:26:24.869580] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.257 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.514 [2024-07-13 13:26:25.007091] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:50.771 [2024-07-13 13:26:25.273001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.771 [2024-07-13 13:26:25.273069] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.771 [2024-07-13 13:26:25.273100] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.771 [2024-07-13 13:26:25.273120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.771 [2024-07-13 13:26:25.273154] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.771 [2024-07-13 13:26:25.273296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.771 [2024-07-13 13:26:25.273373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.771 [2024-07-13 13:26:25.273383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.335 13:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.335 13:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:14:51.335 13:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.335 13:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:51.335 13:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.335 13:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.335 13:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:51.335 13:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:51.592 [2024-07-13 13:26:26.164439] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.592 13:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:51.849 13:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.105 [2024-07-13 13:26:26.690146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.105 13:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:52.362 13:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:14:52.619 Malloc0 00:14:52.619 13:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:52.876 Delay0 00:14:52.876 13:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.133 13:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:53.389 NULL1 00:14:53.389 13:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:53.646 13:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=231801 00:14:53.647 13:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:53.647 13:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:14:53.647 13:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.647 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.904 13:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.161 13:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:54.161 13:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:54.418 true 00:14:54.418 13:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:14:54.418 13:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.675 13:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.933 13:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:54.933 13:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:55.191 true 00:14:55.191 13:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:14:55.191 13:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.128 Read completed with error (sct=0, sc=11) 00:14:56.128 13:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:56.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:56.387 13:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:56.387 13:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:56.387 true 00:14:56.387 13:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:14:56.387 13:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.645 13:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.903 13:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:56.903 13:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:57.161 true 00:14:57.161 13:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:14:57.161 13:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.164 13:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:58.473 13:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:58.473 13:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:58.730 true 00:14:58.730 13:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:14:58.730 13:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.989 13:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.989 13:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:58.989 13:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:59.248 true 00:14:59.248 13:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:14:59.248 13:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
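Editor's note: the entries above settle into a fixed cadence, with ns_hotplug_stress.sh lines @44-@50 repeating while spdk_nvme_perf (PID 231801, started at @40 with -t 30) keeps issuing reads. A minimal bash sketch of the loop those lines imply, reconstructed from the trace alone (the exact loop syntax and the nqn shorthand are assumptions; rpc_py, PERF_PID, null_size and the subsystem/bdev names come from earlier entries):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

while kill -0 "$PERF_PID" 2>/dev/null; do        # @44: keep going while perf is still running
    $rpc_py nvmf_subsystem_remove_ns "$nqn" 1    # @45: hot-remove namespace 1 (Delay0)
    $rpc_py nvmf_subsystem_add_ns "$nqn" Delay0  # @46: hot-add it back
    null_size=$((null_size + 1))                 # @49: 1000 -> 1001 -> 1002 -> ...
    $rpc_py bdev_null_resize NULL1 "$null_size"  # @50: resize NSID 2 (NULL1) while it is under I/O
done

The loop ends on its own once the 30-second perf run exits; the later "kill: (231801) - No such process" entry marks that point.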
00:15:00.182 13:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.440 13:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:00.440 13:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:00.698 true 00:15:00.698 13:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:00.698 13:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.956 13:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.214 13:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:01.214 13:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:01.472 true 00:15:01.472 13:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:01.472 13:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.405 13:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:02.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:02.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:02.663 13:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:02.663 13:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:02.921 true 00:15:02.921 13:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:02.921 13:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.179 13:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.438 13:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:03.438 13:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:03.697 true 00:15:03.697 13:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:03.697 13:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:15:04.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:04.631 13:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.631 13:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:04.631 13:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:04.889 true 00:15:04.889 13:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:04.889 13:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.147 13:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.405 13:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:05.405 13:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:05.664 true 00:15:05.664 13:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:05.664 13:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.598 13:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.856 13:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:06.856 13:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:07.114 true 00:15:07.114 13:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:07.114 13:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.372 13:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:07.630 13:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:07.630 13:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:07.889 true 00:15:07.889 13:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:07.889 13:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
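Editor's note on the interleaved error lines: "Read completed with error (sct=0, sc=11)" is spdk_nvme_perf reporting reads that land while namespace 1 is detached, and the -Q 1000 flag it was started with appears to rate-limit that reporting, hence "Message suppressed 999 times". Decoding the status fields (assuming decimal formatting and the NVMe base specification's generic status table):

sct=0  -> Status Code Type 0, Generic Command Status
sc=11  -> 0x0b, Invalid Namespace or Format

which is the completion one would expect against a namespace that has just been hot-removed.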
00:15:08.822 13:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:08.822 13:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:08.822 13:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:09.080 true 00:15:09.080 13:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:09.080 13:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.337 13:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.595 13:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:09.595 13:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:09.853 true 00:15:09.853 13:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:09.853 13:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.787 13:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.044 13:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:11.044 13:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:11.302 true 00:15:11.302 13:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:11.302 13:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.560 13:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.818 13:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:11.818 13:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:12.075 true 00:15:12.075 13:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:12.075 13:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.036 13:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:13.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:13.294 13:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:13.294 13:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:13.294 true 00:15:13.294 13:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:13.294 13:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.551 13:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.809 13:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:13.809 13:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:14.067 true 00:15:14.067 13:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:14.067 13:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.998 13:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:14.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:15.256 13:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:15.256 13:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:15.514 true 00:15:15.514 13:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:15.514 13:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.772 13:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:16.029 13:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:16.029 13:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:16.286 true 00:15:16.286 13:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:16.286 13:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:15:17.218 13:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:17.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:17.476 13:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:17.476 13:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:17.733 true 00:15:17.991 13:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:17.991 13:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.991 13:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:18.255 13:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:18.255 13:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:18.512 true 00:15:18.512 13:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:18.512 13:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:18.769 13:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:19.026 13:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:19.026 13:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:19.282 true 00:15:19.282 13:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:19.282 13:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.655 13:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:20.655 13:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:20.655 13:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:20.912 true 00:15:20.912 13:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:20.912 13:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.169 13:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.427 13:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:21.427 13:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:21.685 true 00:15:21.685 13:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:21.685 13:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:22.618 13:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:22.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:22.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:22.618 13:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:22.618 13:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:22.876 true 00:15:22.876 13:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:22.876 13:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.134 13:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:23.391 13:26:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:23.391 13:26:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:23.649 true 00:15:23.649 13:26:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:23.649 13:26:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.594 Initializing NVMe Controllers 00:15:24.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:24.594 Controller IO queue size 128, less than required. 00:15:24.594 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:24.594 Controller IO queue size 128, less than required. 00:15:24.594 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:24.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:24.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:24.594 Initialization complete. Launching workers. 
00:15:24.594 ======================================================== 00:15:24.594 Latency(us) 00:15:24.594 Device Information : IOPS MiB/s Average min max 00:15:24.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 588.60 0.29 113406.56 2959.11 1107494.99 00:15:24.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7864.73 3.84 16223.36 4826.71 387371.85 00:15:24.594 ======================================================== 00:15:24.594 Total : 8453.33 4.13 22990.16 2959.11 1107494.99 00:15:24.594 00:15:24.594 13:26:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.854 13:26:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:15:24.854 13:26:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:15:25.111 true 00:15:25.111 13:26:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 231801 00:15:25.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (231801) - No such process 00:15:25.111 13:26:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 231801 00:15:25.111 13:26:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.369 13:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:25.625 13:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:15:25.625 13:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:15:25.625 13:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:15:25.625 13:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:25.625 13:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:15:25.883 null0 00:15:25.883 13:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:25.883 13:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:25.883 13:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:26.139 null1 00:15:26.139 13:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:26.139 13:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:26.139 13:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:26.396 null2 00:15:26.397 13:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:26.397 13:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads 
)) 00:15:26.397 13:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:26.654 null3 00:15:26.654 13:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:26.654 13:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:26.654 13:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:26.912 null4 00:15:26.912 13:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:26.912 13:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:26.912 13:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:27.169 null5 00:15:27.169 13:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:27.169 13:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:27.169 13:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:27.426 null6 00:15:27.426 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:27.426 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:27.426 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:27.694 null7 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:27.694 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
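Editor's note: by this point the trace is interleaving eight background workers. Lines @58-@64 above, the wait at @66 just below, and the add_remove body traced at @14-@18 imply roughly the following bash, reconstructed from the trace (loop syntax is an assumption; names, sizes and counts are as logged):

add_remove() {                                   # @14-@18: ten add/remove rounds per worker
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8                                       # @58
pids=()
for ((i = 0; i < nthreads; i++)); do             # @59-@60: create null0..null7 (100 MiB, 4096-byte blocks)
    $rpc_py bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; i++)); do             # @62-@64: run the workers concurrently
    add_remove "$((i + 1))" "null$i" &           # worker i hammers NSID i+1
    pids+=($!)
done
wait "${pids[@]}"                                # @66: the eight PIDs logged below

So NSIDs 1-8 are added and removed in parallel against the same subsystem, which is the hotplug race this phase of the test exercises.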
00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 236075 236076 236077 236080 236082 236084 236086 236088 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:27.695 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:28.008 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:28.008 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:28.008 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.008 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:28.008 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:28.008 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:28.008 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:28.008 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.275 13:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:28.532 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:28.532 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:28.532 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.532 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:28.532 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:28.532 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:28.532 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:28.532 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.790 13:27:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:28.790 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:29.048 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:29.048 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:29.048 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:29.048 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:29.048 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:29.048 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.048 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:29.048 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.305 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.306 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:29.306 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.306 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.306 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:29.306 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.306 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.306 13:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:29.564 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:29.564 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:29.564 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:29.564 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:29.564 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:29.564 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:29.564 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.564 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:29.821 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:29.821 
13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:30.078 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:30.078 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:30.078 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:30.078 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:30.078 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.078 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:30.336 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:30.336 13:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:30.336 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.336 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.336 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:30.336 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.336 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:30.594 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:30.851 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:30.851 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:30.851 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:30.851 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:30.851 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.851 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:30.851 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:30.851 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:31.108 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:15:31.108 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.108 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:31.108 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.108 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.108 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:31.108 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.108 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.109 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:31.366 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:31.366 
13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:31.366 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:31.366 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.366 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:31.366 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:31.366 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:31.366 13:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:31.623 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:31.880 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:31.880 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:31.880 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:31.880 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.880 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:31.880 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:31.880 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:31.880 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.138 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:32.395 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:32.395 13:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:32.395 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:32.395 13:27:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.395 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:32.395 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:32.395 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:32.395 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:32.653 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:32.654 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:32.654 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:32.654 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:32.912 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:32.912 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:32.912 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.912 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:32.912 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:32.912 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:32.912 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:32.912 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
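Around this point every worker has completed its ten iterations; the remaining (( ++i )) / (( i < 10 )) entries are just the loops terminating. The test then clears its error trap and calls nvmftestfini, which syncs, unloads the NVMe-oF kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kills the target application (pid 231371 in this run), and flushes the test address from cvl_0_1. A condensed sketch of that shutdown, assuming the helpers do only what the logged commands show; the real nvmf/common.sh adds retries and error handling, and nvmfpid is an assumed variable name:

  # Condensed sketch of the teardown traced below (assumptions noted above).
  killprocess() {                         # simplified; the real helper checks ps -o comm= and waits
      kill "$1" 2>/dev/null || true
  }

  nvmfcleanup() {
      sync
      set +e
      modprobe -v -r nvme-tcp             # emits the rmmod nvme_tcp/... lines
      modprobe -v -r nvme-fabrics
      set -e
  }

  nvmftestfini() {
      nvmfcleanup
      killprocess "$nvmfpid"              # "killing process with pid 231371"
      ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # stand-in for _remove_spdk_ns
      ip -4 addr flush cvl_0_1            # release the second test port's address
  }

  trap - SIGINT SIGTERM EXIT              # ns_hotplug_stress.sh@68
  nvmftestfini                            # ns_hotplug_stress.sh@70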
00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.171 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.171 rmmod nvme_tcp 00:15:33.171 rmmod nvme_fabrics 00:15:33.171 rmmod nvme_keyring 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 231371 ']' 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 231371 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 231371 ']' 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 231371 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 231371 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 231371' 00:15:33.429 killing process with pid 231371 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 231371 00:15:33.429 13:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 231371 00:15:34.803 13:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.803 13:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.803 13:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:15:34.803 13:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.803 13:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.803 13:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.803 13:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.803 13:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.704 13:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:36.704 00:15:36.704 real 0m48.771s 00:15:36.704 user 3m36.969s 00:15:36.704 sys 0m16.931s 00:15:36.705 13:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:36.705 13:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.705 ************************************ 00:15:36.705 END TEST nvmf_ns_hotplug_stress 00:15:36.705 ************************************ 00:15:36.705 13:27:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:36.705 13:27:11 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:36.705 13:27:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:36.705 13:27:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.705 13:27:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:36.705 ************************************ 00:15:36.705 START TEST nvmf_connect_stress 00:15:36.705 ************************************ 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:36.705 * Looking for test storage... 
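With the hotplug-stress test finished (48.8 s wall clock, per the summary just above), the harness moves straight on to connect_stress.sh: the preamble around this point locates the test storage, sources test/nvmf/common.sh, generates a host NQN with nvme gen-hostnqn, and calls nvmftestinit for a physical-NIC (NET_TYPE=phy) TCP setup. A rough sketch of that per-test skeleton; the testdir/rootdir derivation, the trap, and the stressor body are inferred SPDK-test conventions rather than lines shown in this trace:

  #!/usr/bin/env bash
  # Rough sketch of the per-test skeleton connect_stress follows (assumptions noted above).
  testdir=$(readlink -f "$(dirname "$0")")
  rootdir=$(readlink -f "$testdir/../../..")
  source "$rootdir/test/nvmf/common.sh"   # connect_stress.sh@10 in the trace

  nvmftestinit                            # connect_stress.sh@12: pick NICs, set up TCP
  trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT

  # ... start nvmf_tgt, create the subsystem, run the connect stressor ...

  trap - SIGINT SIGTERM EXIT
  nvmftestfini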
00:15:36.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:36.705 13:27:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.603 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:38.604 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:38.604 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:38.604 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.604 13:27:13 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:38.604 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.604 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:38.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:38.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:15:38.862 00:15:38.862 --- 10.0.0.2 ping statistics --- 00:15:38.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.862 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:38.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:15:38.862 00:15:38.862 --- 10.0.0.1 ping statistics --- 00:15:38.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.862 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=239463 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 239463 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 239463 ']' 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.862 13:27:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.862 [2024-07-13 13:27:13.484528] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
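[Editor's note] The ping exchanges just above are the final check of nvmftestinit/nvmf_tcp_init, and the "Starting SPDK ... initialization" banner is nvmfappstart bringing up the target. A condensed sketch of that setup, taken almost verbatim from the traced nvmf/common.sh commands, is below; $rootdir, the "&" backgrounding, and the "$!" capture are stand-ins, since the log only shows the resolved command line and the resulting pid 239463.

  # nvmf_tcp_init as traced above: move the target port into its own netns,
  # give each side an address on 10.0.0.0/24, and verify reachability both ways.
  NVMF_TARGET_INTERFACE=cvl_0_0; NVMF_INITIATOR_INTERFACE=cvl_0_1
  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side lives inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port toward the initiator
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp                                                    # initiator-side kernel transport
  # nvmfappstart then launches the target inside that namespace and waits for /var/tmp/spdk.sock:
  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!                                                           # 239463 in this run; waitforlisten 239463 follows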
00:15:38.862 [2024-07-13 13:27:13.484669] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.862 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.120 [2024-07-13 13:27:13.629649] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:39.378 [2024-07-13 13:27:13.886445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.378 [2024-07-13 13:27:13.886533] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.378 [2024-07-13 13:27:13.886568] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.378 [2024-07-13 13:27:13.886588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.378 [2024-07-13 13:27:13.886610] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.378 [2024-07-13 13:27:13.886751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.378 [2024-07-13 13:27:13.886843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.378 [2024-07-13 13:27:13.886863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.943 [2024-07-13 13:27:14.427829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.943 [2024-07-13 13:27:14.454381] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.943 NULL1 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=239618 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.943 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.943 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.201 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.201 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:40.201 13:27:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.201 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.201 13:27:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.459 13:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.459 13:27:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:40.459 13:27:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.459 13:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.459 13:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.024 13:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.024 13:27:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:41.024 
13:27:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.024 13:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.024 13:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.282 13:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.282 13:27:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:41.282 13:27:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.282 13:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.282 13:27:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.540 13:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.540 13:27:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:41.540 13:27:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.540 13:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.540 13:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.798 13:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.798 13:27:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:41.798 13:27:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.798 13:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.798 13:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.056 13:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.056 13:27:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:42.056 13:27:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.056 13:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.056 13:27:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.652 13:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.652 13:27:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:42.652 13:27:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.652 13:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.652 13:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.910 13:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.910 13:27:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:42.910 13:27:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.910 13:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.910 13:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.168 13:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.168 13:27:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:43.168 13:27:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:15:43.168 13:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.168 13:27:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.426 13:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.426 13:27:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:43.426 13:27:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.426 13:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.426 13:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.684 13:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.684 13:27:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:43.684 13:27:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.684 13:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.684 13:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.249 13:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.249 13:27:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:44.249 13:27:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.249 13:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.249 13:27:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.507 13:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.507 13:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:44.507 13:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.507 13:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.507 13:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.764 13:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.764 13:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:44.764 13:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.764 13:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.764 13:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.022 13:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.022 13:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:45.022 13:27:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.022 13:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.022 13:27:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.585 13:27:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.585 13:27:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:45.585 13:27:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.585 13:27:20 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.585 13:27:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.842 13:27:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.842 13:27:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:45.842 13:27:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.842 13:27:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.842 13:27:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.099 13:27:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.099 13:27:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:46.099 13:27:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.099 13:27:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.099 13:27:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.356 13:27:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.356 13:27:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:46.356 13:27:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.356 13:27:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.356 13:27:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:46.613 13:27:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.613 13:27:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:46.613 13:27:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.613 13:27:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.613 13:27:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.178 13:27:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.178 13:27:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:47.178 13:27:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.178 13:27:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.178 13:27:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.436 13:27:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.436 13:27:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:47.436 13:27:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.436 13:27:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.436 13:27:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.694 13:27:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.694 13:27:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:47.694 13:27:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.694 13:27:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.694 
13:27:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.952 13:27:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.952 13:27:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:47.952 13:27:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.952 13:27:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.952 13:27:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.210 13:27:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.210 13:27:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:48.210 13:27:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.210 13:27:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.210 13:27:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:48.775 13:27:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.775 13:27:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:48.775 13:27:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.775 13:27:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.775 13:27:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.033 13:27:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.033 13:27:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:49.033 13:27:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.033 13:27:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.033 13:27:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.290 13:27:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.290 13:27:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:49.290 13:27:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.290 13:27:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.290 13:27:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:49.547 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.547 13:27:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:49.547 13:27:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.547 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.547 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.111 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.112 13:27:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:50.112 13:27:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.112 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.112 13:27:24 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.112 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 239618 00:15:50.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (239618) - No such process 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 239618 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:50.369 rmmod nvme_tcp 00:15:50.369 rmmod nvme_fabrics 00:15:50.369 rmmod nvme_keyring 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 239463 ']' 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 239463 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 239463 ']' 00:15:50.369 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 239463 00:15:50.370 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:15:50.370 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.370 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 239463 00:15:50.370 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:50.370 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:50.370 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 239463' 00:15:50.370 killing process with pid 239463 00:15:50.370 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 239463 00:15:50.370 13:27:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 239463 00:15:51.744 13:27:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:51.744 13:27:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:51.744 13:27:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:51.744 13:27:26 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.744 13:27:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:51.744 13:27:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.744 13:27:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.744 13:27:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.649 13:27:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:53.649 00:15:53.649 real 0m16.923s 00:15:53.649 user 0m42.271s 00:15:53.649 sys 0m5.821s 00:15:53.649 13:27:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.649 13:27:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.649 ************************************ 00:15:53.649 END TEST nvmf_connect_stress 00:15:53.649 ************************************ 00:15:53.649 13:27:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:53.649 13:27:28 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:53.649 13:27:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:53.649 13:27:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.649 13:27:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.649 ************************************ 00:15:53.649 START TEST nvmf_fused_ordering 00:15:53.649 ************************************ 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:53.649 * Looking for test storage... 
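[Editor's note] connect_stress has finished (note the END TEST banner and the teardown above: nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring, kills nvmf_tgt pid 239463, removes the netns, and flushes cvl_0_1), and run_test now starts fused_ordering.sh. Because each test re-sources nvmf/common.sh, the NIC discovery and the namespace setup repeat from scratch. A simplified excerpt of the traced discovery logic follows; pci_bus_cache is populated earlier in common.sh and is not shown here.

  declare -A pci_bus_cache                 # filled elsewhere in common.sh from PCI enumeration (assumed, not shown in this trace)
  intel=0x8086
  e810=() pci_devs=() net_devs=()
  e810+=(${pci_bus_cache["$intel:0x1592"]})
  e810+=(${pci_bus_cache["$intel:0x159b"]})
  pci_devs=("${e810[@]}")                  # the [[ e810 == e810 ]] branch below: this run is filtered to E810 ports only
  for pci in "${pci_devs[@]}"; do          # here: 0000:0a:00.0 and 0000:0a:00.1 (ice driver, device id 0x159b)
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")      # strip the sysfs path, keeping just the interface name
      net_devs+=("${pci_net_devs[@]}")             # -> cvl_0_0 and cvl_0_1 on this host
  done

nvmf_tcp_init then takes NVMF_TARGET_INTERFACE=cvl_0_0 and NVMF_INITIATOR_INTERFACE=cvl_0_1 from net_devs and rebuilds cvl_0_0_ns_spdk exactly as sketched earlier, since the previous test's remove_spdk_ns deleted it.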
00:15:53.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.649 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:53.650 13:27:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:55.554 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.554 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:55.554 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:55.554 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:55.554 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:55.554 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:55.554 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:55.554 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:55.554 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:55.813 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:55.813 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:55.813 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:55.814 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:55.814 13:27:30 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:55.814 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:55.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:55.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:15:55.814 00:15:55.814 --- 10.0.0.2 ping statistics --- 00:15:55.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.814 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:15:55.814 00:15:55.814 --- 10.0.0.1 ping statistics --- 00:15:55.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.814 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=242897 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 242897 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 242897 ']' 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.814 13:27:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:55.814 [2024-07-13 13:27:30.551351] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:55.814 [2024-07-13 13:27:30.551488] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.072 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.072 [2024-07-13 13:27:30.688484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.329 [2024-07-13 13:27:30.947064] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.329 [2024-07-13 13:27:30.947147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.329 [2024-07-13 13:27:30.947175] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.329 [2024-07-13 13:27:30.947200] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.329 [2024-07-13 13:27:30.947221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.329 [2024-07-13 13:27:30.947277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.896 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.896 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:15:56.896 13:27:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:56.897 [2024-07-13 13:27:31.477384] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:56.897 [2024-07-13 13:27:31.493617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.897 13:27:31 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:56.897 NULL1 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.897 13:27:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:56.897 [2024-07-13 13:27:31.564238] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:56.897 [2024-07-13 13:27:31.564335] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243045 ] 00:15:56.897 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.464 Attached to nqn.2016-06.io.spdk:cnode1 00:15:57.464 Namespace ID: 1 size: 1GB 00:15:57.464 fused_ordering(0) 00:15:57.464 fused_ordering(1) 00:15:57.464 fused_ordering(2) 00:15:57.464 fused_ordering(3) 00:15:57.464 fused_ordering(4) 00:15:57.464 fused_ordering(5) 00:15:57.464 fused_ordering(6) 00:15:57.464 fused_ordering(7) 00:15:57.464 fused_ordering(8) 00:15:57.464 fused_ordering(9) 00:15:57.464 fused_ordering(10) 00:15:57.464 fused_ordering(11) 00:15:57.464 fused_ordering(12) 00:15:57.464 fused_ordering(13) 00:15:57.464 fused_ordering(14) 00:15:57.464 fused_ordering(15) 00:15:57.464 fused_ordering(16) 00:15:57.464 fused_ordering(17) 00:15:57.464 fused_ordering(18) 00:15:57.464 fused_ordering(19) 00:15:57.464 fused_ordering(20) 00:15:57.464 fused_ordering(21) 00:15:57.464 fused_ordering(22) 00:15:57.464 fused_ordering(23) 00:15:57.464 fused_ordering(24) 00:15:57.464 fused_ordering(25) 00:15:57.464 fused_ordering(26) 00:15:57.464 fused_ordering(27) 00:15:57.464 fused_ordering(28) 00:15:57.464 fused_ordering(29) 00:15:57.464 fused_ordering(30) 00:15:57.464 fused_ordering(31) 00:15:57.464 fused_ordering(32) 00:15:57.464 fused_ordering(33) 00:15:57.464 fused_ordering(34) 00:15:57.464 fused_ordering(35) 00:15:57.464 fused_ordering(36) 00:15:57.464 fused_ordering(37) 00:15:57.464 fused_ordering(38) 00:15:57.464 fused_ordering(39) 00:15:57.464 fused_ordering(40) 00:15:57.464 fused_ordering(41) 00:15:57.464 fused_ordering(42) 00:15:57.464 fused_ordering(43) 00:15:57.464 
fused_ordering(44) 00:15:57.464 fused_ordering(45) 00:15:57.464 fused_ordering(46) 00:15:57.464 fused_ordering(47) 00:15:57.464 fused_ordering(48) 00:15:57.465 fused_ordering(49) 00:15:57.465 fused_ordering(50) 00:15:57.465 fused_ordering(51) 00:15:57.465 fused_ordering(52) 00:15:57.465 fused_ordering(53) 00:15:57.465 fused_ordering(54) 00:15:57.465 fused_ordering(55) 00:15:57.465 fused_ordering(56) 00:15:57.465 fused_ordering(57) 00:15:57.465 fused_ordering(58) 00:15:57.465 fused_ordering(59) 00:15:57.465 fused_ordering(60) 00:15:57.465 fused_ordering(61) 00:15:57.465 fused_ordering(62) 00:15:57.465 fused_ordering(63) 00:15:57.465 fused_ordering(64) 00:15:57.465 fused_ordering(65) 00:15:57.465 fused_ordering(66) 00:15:57.465 fused_ordering(67) 00:15:57.465 fused_ordering(68) 00:15:57.465 fused_ordering(69) 00:15:57.465 fused_ordering(70) 00:15:57.465 fused_ordering(71) 00:15:57.465 fused_ordering(72) 00:15:57.465 fused_ordering(73) 00:15:57.465 fused_ordering(74) 00:15:57.465 fused_ordering(75) 00:15:57.465 fused_ordering(76) 00:15:57.465 fused_ordering(77) 00:15:57.465 fused_ordering(78) 00:15:57.465 fused_ordering(79) 00:15:57.465 fused_ordering(80) 00:15:57.465 fused_ordering(81) 00:15:57.465 fused_ordering(82) 00:15:57.465 fused_ordering(83) 00:15:57.465 fused_ordering(84) 00:15:57.465 fused_ordering(85) 00:15:57.465 fused_ordering(86) 00:15:57.465 fused_ordering(87) 00:15:57.465 fused_ordering(88) 00:15:57.465 fused_ordering(89) 00:15:57.465 fused_ordering(90) 00:15:57.465 fused_ordering(91) 00:15:57.465 fused_ordering(92) 00:15:57.465 fused_ordering(93) 00:15:57.465 fused_ordering(94) 00:15:57.465 fused_ordering(95) 00:15:57.465 fused_ordering(96) 00:15:57.465 fused_ordering(97) 00:15:57.465 fused_ordering(98) 00:15:57.465 fused_ordering(99) 00:15:57.465 fused_ordering(100) 00:15:57.465 fused_ordering(101) 00:15:57.465 fused_ordering(102) 00:15:57.465 fused_ordering(103) 00:15:57.465 fused_ordering(104) 00:15:57.465 fused_ordering(105) 00:15:57.465 fused_ordering(106) 00:15:57.465 fused_ordering(107) 00:15:57.465 fused_ordering(108) 00:15:57.465 fused_ordering(109) 00:15:57.465 fused_ordering(110) 00:15:57.465 fused_ordering(111) 00:15:57.465 fused_ordering(112) 00:15:57.465 fused_ordering(113) 00:15:57.465 fused_ordering(114) 00:15:57.465 fused_ordering(115) 00:15:57.465 fused_ordering(116) 00:15:57.465 fused_ordering(117) 00:15:57.465 fused_ordering(118) 00:15:57.465 fused_ordering(119) 00:15:57.465 fused_ordering(120) 00:15:57.465 fused_ordering(121) 00:15:57.465 fused_ordering(122) 00:15:57.465 fused_ordering(123) 00:15:57.465 fused_ordering(124) 00:15:57.465 fused_ordering(125) 00:15:57.465 fused_ordering(126) 00:15:57.465 fused_ordering(127) 00:15:57.465 fused_ordering(128) 00:15:57.465 fused_ordering(129) 00:15:57.465 fused_ordering(130) 00:15:57.465 fused_ordering(131) 00:15:57.465 fused_ordering(132) 00:15:57.465 fused_ordering(133) 00:15:57.465 fused_ordering(134) 00:15:57.465 fused_ordering(135) 00:15:57.465 fused_ordering(136) 00:15:57.465 fused_ordering(137) 00:15:57.465 fused_ordering(138) 00:15:57.465 fused_ordering(139) 00:15:57.465 fused_ordering(140) 00:15:57.465 fused_ordering(141) 00:15:57.465 fused_ordering(142) 00:15:57.465 fused_ordering(143) 00:15:57.465 fused_ordering(144) 00:15:57.465 fused_ordering(145) 00:15:57.465 fused_ordering(146) 00:15:57.465 fused_ordering(147) 00:15:57.465 fused_ordering(148) 00:15:57.465 fused_ordering(149) 00:15:57.465 fused_ordering(150) 00:15:57.465 fused_ordering(151) 00:15:57.465 fused_ordering(152) 00:15:57.465 
fused_ordering(153) 00:15:57.465 fused_ordering(154) 00:15:57.465 fused_ordering(155) 00:15:57.465 fused_ordering(156) 00:15:57.465 fused_ordering(157) 00:15:57.465 fused_ordering(158) 00:15:57.465 fused_ordering(159) 00:15:57.465 fused_ordering(160) 00:15:57.465 fused_ordering(161) 00:15:57.465 fused_ordering(162) 00:15:57.465 fused_ordering(163) 00:15:57.465 fused_ordering(164) 00:15:57.465 fused_ordering(165) 00:15:57.465 fused_ordering(166) 00:15:57.465 fused_ordering(167) 00:15:57.465 fused_ordering(168) 00:15:57.465 fused_ordering(169) 00:15:57.465 fused_ordering(170) 00:15:57.465 fused_ordering(171) 00:15:57.465 fused_ordering(172) 00:15:57.465 fused_ordering(173) 00:15:57.465 fused_ordering(174) 00:15:57.465 fused_ordering(175) 00:15:57.465 fused_ordering(176) 00:15:57.465 fused_ordering(177) 00:15:57.465 fused_ordering(178) 00:15:57.465 fused_ordering(179) 00:15:57.465 fused_ordering(180) 00:15:57.465 fused_ordering(181) 00:15:57.465 fused_ordering(182) 00:15:57.465 fused_ordering(183) 00:15:57.465 fused_ordering(184) 00:15:57.465 fused_ordering(185) 00:15:57.465 fused_ordering(186) 00:15:57.465 fused_ordering(187) 00:15:57.465 fused_ordering(188) 00:15:57.465 fused_ordering(189) 00:15:57.465 fused_ordering(190) 00:15:57.465 fused_ordering(191) 00:15:57.465 fused_ordering(192) 00:15:57.465 fused_ordering(193) 00:15:57.465 fused_ordering(194) 00:15:57.465 fused_ordering(195) 00:15:57.465 fused_ordering(196) 00:15:57.465 fused_ordering(197) 00:15:57.465 fused_ordering(198) 00:15:57.465 fused_ordering(199) 00:15:57.465 fused_ordering(200) 00:15:57.465 fused_ordering(201) 00:15:57.465 fused_ordering(202) 00:15:57.465 fused_ordering(203) 00:15:57.465 fused_ordering(204) 00:15:57.465 fused_ordering(205) 00:15:58.032 fused_ordering(206) 00:15:58.032 fused_ordering(207) 00:15:58.032 fused_ordering(208) 00:15:58.032 fused_ordering(209) 00:15:58.032 fused_ordering(210) 00:15:58.032 fused_ordering(211) 00:15:58.032 fused_ordering(212) 00:15:58.032 fused_ordering(213) 00:15:58.032 fused_ordering(214) 00:15:58.032 fused_ordering(215) 00:15:58.032 fused_ordering(216) 00:15:58.032 fused_ordering(217) 00:15:58.032 fused_ordering(218) 00:15:58.032 fused_ordering(219) 00:15:58.032 fused_ordering(220) 00:15:58.032 fused_ordering(221) 00:15:58.032 fused_ordering(222) 00:15:58.032 fused_ordering(223) 00:15:58.032 fused_ordering(224) 00:15:58.032 fused_ordering(225) 00:15:58.032 fused_ordering(226) 00:15:58.032 fused_ordering(227) 00:15:58.032 fused_ordering(228) 00:15:58.032 fused_ordering(229) 00:15:58.032 fused_ordering(230) 00:15:58.032 fused_ordering(231) 00:15:58.032 fused_ordering(232) 00:15:58.032 fused_ordering(233) 00:15:58.032 fused_ordering(234) 00:15:58.032 fused_ordering(235) 00:15:58.032 fused_ordering(236) 00:15:58.032 fused_ordering(237) 00:15:58.032 fused_ordering(238) 00:15:58.032 fused_ordering(239) 00:15:58.032 fused_ordering(240) 00:15:58.032 fused_ordering(241) 00:15:58.032 fused_ordering(242) 00:15:58.032 fused_ordering(243) 00:15:58.032 fused_ordering(244) 00:15:58.032 fused_ordering(245) 00:15:58.032 fused_ordering(246) 00:15:58.032 fused_ordering(247) 00:15:58.032 fused_ordering(248) 00:15:58.032 fused_ordering(249) 00:15:58.032 fused_ordering(250) 00:15:58.032 fused_ordering(251) 00:15:58.032 fused_ordering(252) 00:15:58.032 fused_ordering(253) 00:15:58.032 fused_ordering(254) 00:15:58.032 fused_ordering(255) 00:15:58.032 fused_ordering(256) 00:15:58.032 fused_ordering(257) 00:15:58.032 fused_ordering(258) 00:15:58.032 fused_ordering(259) 00:15:58.032 fused_ordering(260) 
00:15:58.032 fused_ordering(261) 00:15:58.032 fused_ordering(262) 00:15:58.032 fused_ordering(263) 00:15:58.032 fused_ordering(264) 00:15:58.032 fused_ordering(265) 00:15:58.032 fused_ordering(266) 00:15:58.032 fused_ordering(267) 00:15:58.032 fused_ordering(268) 00:15:58.032 fused_ordering(269) 00:15:58.032 fused_ordering(270) 00:15:58.032 fused_ordering(271) 00:15:58.032 fused_ordering(272) 00:15:58.032 fused_ordering(273) 00:15:58.032 fused_ordering(274) 00:15:58.032 fused_ordering(275) 00:15:58.032 fused_ordering(276) 00:15:58.032 fused_ordering(277) 00:15:58.032 fused_ordering(278) 00:15:58.032 fused_ordering(279) 00:15:58.032 fused_ordering(280) 00:15:58.032 fused_ordering(281) 00:15:58.032 fused_ordering(282) 00:15:58.032 fused_ordering(283) 00:15:58.032 fused_ordering(284) 00:15:58.032 fused_ordering(285) 00:15:58.032 fused_ordering(286) 00:15:58.032 fused_ordering(287) 00:15:58.032 fused_ordering(288) 00:15:58.032 fused_ordering(289) 00:15:58.032 fused_ordering(290) 00:15:58.032 fused_ordering(291) 00:15:58.032 fused_ordering(292) 00:15:58.032 fused_ordering(293) 00:15:58.032 fused_ordering(294) 00:15:58.032 fused_ordering(295) 00:15:58.032 fused_ordering(296) 00:15:58.032 fused_ordering(297) 00:15:58.032 fused_ordering(298) 00:15:58.032 fused_ordering(299) 00:15:58.032 fused_ordering(300) 00:15:58.032 fused_ordering(301) 00:15:58.032 fused_ordering(302) 00:15:58.032 fused_ordering(303) 00:15:58.032 fused_ordering(304) 00:15:58.032 fused_ordering(305) 00:15:58.032 fused_ordering(306) 00:15:58.032 fused_ordering(307) 00:15:58.032 fused_ordering(308) 00:15:58.032 fused_ordering(309) 00:15:58.032 fused_ordering(310) 00:15:58.032 fused_ordering(311) 00:15:58.032 fused_ordering(312) 00:15:58.032 fused_ordering(313) 00:15:58.032 fused_ordering(314) 00:15:58.032 fused_ordering(315) 00:15:58.032 fused_ordering(316) 00:15:58.032 fused_ordering(317) 00:15:58.032 fused_ordering(318) 00:15:58.032 fused_ordering(319) 00:15:58.032 fused_ordering(320) 00:15:58.032 fused_ordering(321) 00:15:58.032 fused_ordering(322) 00:15:58.032 fused_ordering(323) 00:15:58.032 fused_ordering(324) 00:15:58.032 fused_ordering(325) 00:15:58.032 fused_ordering(326) 00:15:58.032 fused_ordering(327) 00:15:58.032 fused_ordering(328) 00:15:58.032 fused_ordering(329) 00:15:58.032 fused_ordering(330) 00:15:58.032 fused_ordering(331) 00:15:58.032 fused_ordering(332) 00:15:58.032 fused_ordering(333) 00:15:58.032 fused_ordering(334) 00:15:58.032 fused_ordering(335) 00:15:58.032 fused_ordering(336) 00:15:58.032 fused_ordering(337) 00:15:58.032 fused_ordering(338) 00:15:58.032 fused_ordering(339) 00:15:58.032 fused_ordering(340) 00:15:58.032 fused_ordering(341) 00:15:58.032 fused_ordering(342) 00:15:58.032 fused_ordering(343) 00:15:58.032 fused_ordering(344) 00:15:58.032 fused_ordering(345) 00:15:58.032 fused_ordering(346) 00:15:58.032 fused_ordering(347) 00:15:58.032 fused_ordering(348) 00:15:58.032 fused_ordering(349) 00:15:58.032 fused_ordering(350) 00:15:58.032 fused_ordering(351) 00:15:58.032 fused_ordering(352) 00:15:58.032 fused_ordering(353) 00:15:58.032 fused_ordering(354) 00:15:58.032 fused_ordering(355) 00:15:58.032 fused_ordering(356) 00:15:58.032 fused_ordering(357) 00:15:58.032 fused_ordering(358) 00:15:58.032 fused_ordering(359) 00:15:58.032 fused_ordering(360) 00:15:58.032 fused_ordering(361) 00:15:58.032 fused_ordering(362) 00:15:58.032 fused_ordering(363) 00:15:58.032 fused_ordering(364) 00:15:58.032 fused_ordering(365) 00:15:58.032 fused_ordering(366) 00:15:58.032 fused_ordering(367) 00:15:58.032 
fused_ordering(368) 00:15:58.032 fused_ordering(369) 00:15:58.032 fused_ordering(370) 00:15:58.032 fused_ordering(371) 00:15:58.032 fused_ordering(372) 00:15:58.032 fused_ordering(373) 00:15:58.032 fused_ordering(374) 00:15:58.032 fused_ordering(375) 00:15:58.032 fused_ordering(376) 00:15:58.032 fused_ordering(377) 00:15:58.032 fused_ordering(378) 00:15:58.032 fused_ordering(379) 00:15:58.032 fused_ordering(380) 00:15:58.032 fused_ordering(381) 00:15:58.032 fused_ordering(382) 00:15:58.032 fused_ordering(383) 00:15:58.032 fused_ordering(384) 00:15:58.032 fused_ordering(385) 00:15:58.032 fused_ordering(386) 00:15:58.032 fused_ordering(387) 00:15:58.032 fused_ordering(388) 00:15:58.032 fused_ordering(389) 00:15:58.032 fused_ordering(390) 00:15:58.032 fused_ordering(391) 00:15:58.032 fused_ordering(392) 00:15:58.032 fused_ordering(393) 00:15:58.032 fused_ordering(394) 00:15:58.032 fused_ordering(395) 00:15:58.032 fused_ordering(396) 00:15:58.032 fused_ordering(397) 00:15:58.032 fused_ordering(398) 00:15:58.032 fused_ordering(399) 00:15:58.032 fused_ordering(400) 00:15:58.032 fused_ordering(401) 00:15:58.032 fused_ordering(402) 00:15:58.032 fused_ordering(403) 00:15:58.032 fused_ordering(404) 00:15:58.032 fused_ordering(405) 00:15:58.032 fused_ordering(406) 00:15:58.032 fused_ordering(407) 00:15:58.032 fused_ordering(408) 00:15:58.032 fused_ordering(409) 00:15:58.032 fused_ordering(410) 00:15:58.644 fused_ordering(411) 00:15:58.644 fused_ordering(412) 00:15:58.644 fused_ordering(413) 00:15:58.644 fused_ordering(414) 00:15:58.644 fused_ordering(415) 00:15:58.644 fused_ordering(416) 00:15:58.644 fused_ordering(417) 00:15:58.644 fused_ordering(418) 00:15:58.644 fused_ordering(419) 00:15:58.644 fused_ordering(420) 00:15:58.644 fused_ordering(421) 00:15:58.644 fused_ordering(422) 00:15:58.644 fused_ordering(423) 00:15:58.644 fused_ordering(424) 00:15:58.644 fused_ordering(425) 00:15:58.644 fused_ordering(426) 00:15:58.644 fused_ordering(427) 00:15:58.644 fused_ordering(428) 00:15:58.644 fused_ordering(429) 00:15:58.644 fused_ordering(430) 00:15:58.644 fused_ordering(431) 00:15:58.644 fused_ordering(432) 00:15:58.644 fused_ordering(433) 00:15:58.644 fused_ordering(434) 00:15:58.644 fused_ordering(435) 00:15:58.644 fused_ordering(436) 00:15:58.644 fused_ordering(437) 00:15:58.644 fused_ordering(438) 00:15:58.644 fused_ordering(439) 00:15:58.644 fused_ordering(440) 00:15:58.644 fused_ordering(441) 00:15:58.644 fused_ordering(442) 00:15:58.644 fused_ordering(443) 00:15:58.644 fused_ordering(444) 00:15:58.644 fused_ordering(445) 00:15:58.644 fused_ordering(446) 00:15:58.644 fused_ordering(447) 00:15:58.644 fused_ordering(448) 00:15:58.644 fused_ordering(449) 00:15:58.644 fused_ordering(450) 00:15:58.644 fused_ordering(451) 00:15:58.644 fused_ordering(452) 00:15:58.644 fused_ordering(453) 00:15:58.644 fused_ordering(454) 00:15:58.644 fused_ordering(455) 00:15:58.644 fused_ordering(456) 00:15:58.644 fused_ordering(457) 00:15:58.644 fused_ordering(458) 00:15:58.644 fused_ordering(459) 00:15:58.644 fused_ordering(460) 00:15:58.644 fused_ordering(461) 00:15:58.644 fused_ordering(462) 00:15:58.644 fused_ordering(463) 00:15:58.644 fused_ordering(464) 00:15:58.644 fused_ordering(465) 00:15:58.644 fused_ordering(466) 00:15:58.644 fused_ordering(467) 00:15:58.644 fused_ordering(468) 00:15:58.644 fused_ordering(469) 00:15:58.644 fused_ordering(470) 00:15:58.644 fused_ordering(471) 00:15:58.644 fused_ordering(472) 00:15:58.644 fused_ordering(473) 00:15:58.644 fused_ordering(474) 00:15:58.644 fused_ordering(475) 
00:15:58.644 fused_ordering(476) 00:15:58.644 fused_ordering(477) 00:15:58.644 fused_ordering(478) 00:15:58.644 fused_ordering(479) 00:15:58.644 fused_ordering(480) 00:15:58.644 fused_ordering(481) 00:15:58.644 fused_ordering(482) 00:15:58.644 fused_ordering(483) 00:15:58.644 fused_ordering(484) 00:15:58.644 fused_ordering(485) 00:15:58.644 fused_ordering(486) 00:15:58.644 fused_ordering(487) 00:15:58.644 fused_ordering(488) 00:15:58.644 fused_ordering(489) 00:15:58.644 fused_ordering(490) 00:15:58.644 fused_ordering(491) 00:15:58.644 fused_ordering(492) 00:15:58.644 fused_ordering(493) 00:15:58.644 fused_ordering(494) 00:15:58.644 fused_ordering(495) 00:15:58.644 fused_ordering(496) 00:15:58.644 fused_ordering(497) 00:15:58.644 fused_ordering(498) 00:15:58.644 fused_ordering(499) 00:15:58.644 fused_ordering(500) 00:15:58.644 fused_ordering(501) 00:15:58.644 fused_ordering(502) 00:15:58.644 fused_ordering(503) 00:15:58.644 fused_ordering(504) 00:15:58.644 fused_ordering(505) 00:15:58.644 fused_ordering(506) 00:15:58.644 fused_ordering(507) 00:15:58.644 fused_ordering(508) 00:15:58.644 fused_ordering(509) 00:15:58.644 fused_ordering(510) 00:15:58.644 fused_ordering(511) 00:15:58.645 fused_ordering(512) 00:15:58.645 fused_ordering(513) 00:15:58.645 fused_ordering(514) 00:15:58.645 fused_ordering(515) 00:15:58.645 fused_ordering(516) 00:15:58.645 fused_ordering(517) 00:15:58.645 fused_ordering(518) 00:15:58.645 fused_ordering(519) 00:15:58.645 fused_ordering(520) 00:15:58.645 fused_ordering(521) 00:15:58.645 fused_ordering(522) 00:15:58.645 fused_ordering(523) 00:15:58.645 fused_ordering(524) 00:15:58.645 fused_ordering(525) 00:15:58.645 fused_ordering(526) 00:15:58.645 fused_ordering(527) 00:15:58.645 fused_ordering(528) 00:15:58.645 fused_ordering(529) 00:15:58.645 fused_ordering(530) 00:15:58.645 fused_ordering(531) 00:15:58.645 fused_ordering(532) 00:15:58.645 fused_ordering(533) 00:15:58.645 fused_ordering(534) 00:15:58.645 fused_ordering(535) 00:15:58.645 fused_ordering(536) 00:15:58.645 fused_ordering(537) 00:15:58.645 fused_ordering(538) 00:15:58.645 fused_ordering(539) 00:15:58.645 fused_ordering(540) 00:15:58.645 fused_ordering(541) 00:15:58.645 fused_ordering(542) 00:15:58.645 fused_ordering(543) 00:15:58.645 fused_ordering(544) 00:15:58.645 fused_ordering(545) 00:15:58.645 fused_ordering(546) 00:15:58.645 fused_ordering(547) 00:15:58.645 fused_ordering(548) 00:15:58.645 fused_ordering(549) 00:15:58.645 fused_ordering(550) 00:15:58.645 fused_ordering(551) 00:15:58.645 fused_ordering(552) 00:15:58.645 fused_ordering(553) 00:15:58.645 fused_ordering(554) 00:15:58.645 fused_ordering(555) 00:15:58.645 fused_ordering(556) 00:15:58.645 fused_ordering(557) 00:15:58.645 fused_ordering(558) 00:15:58.645 fused_ordering(559) 00:15:58.645 fused_ordering(560) 00:15:58.645 fused_ordering(561) 00:15:58.645 fused_ordering(562) 00:15:58.645 fused_ordering(563) 00:15:58.645 fused_ordering(564) 00:15:58.645 fused_ordering(565) 00:15:58.645 fused_ordering(566) 00:15:58.645 fused_ordering(567) 00:15:58.645 fused_ordering(568) 00:15:58.645 fused_ordering(569) 00:15:58.645 fused_ordering(570) 00:15:58.645 fused_ordering(571) 00:15:58.645 fused_ordering(572) 00:15:58.645 fused_ordering(573) 00:15:58.645 fused_ordering(574) 00:15:58.645 fused_ordering(575) 00:15:58.645 fused_ordering(576) 00:15:58.645 fused_ordering(577) 00:15:58.645 fused_ordering(578) 00:15:58.645 fused_ordering(579) 00:15:58.645 fused_ordering(580) 00:15:58.645 fused_ordering(581) 00:15:58.645 fused_ordering(582) 00:15:58.645 
fused_ordering(583) 00:15:58.645 fused_ordering(584) 00:15:58.645 fused_ordering(585) 00:15:58.645 fused_ordering(586) 00:15:58.645 fused_ordering(587) 00:15:58.645 fused_ordering(588) 00:15:58.645 fused_ordering(589) 00:15:58.645 fused_ordering(590) 00:15:58.645 fused_ordering(591) 00:15:58.645 fused_ordering(592) 00:15:58.645 fused_ordering(593) 00:15:58.645 fused_ordering(594) 00:15:58.645 fused_ordering(595) 00:15:58.645 fused_ordering(596) 00:15:58.645 fused_ordering(597) 00:15:58.645 fused_ordering(598) 00:15:58.645 fused_ordering(599) 00:15:58.645 fused_ordering(600) 00:15:58.645 fused_ordering(601) 00:15:58.645 fused_ordering(602) 00:15:58.645 fused_ordering(603) 00:15:58.645 fused_ordering(604) 00:15:58.645 fused_ordering(605) 00:15:58.645 fused_ordering(606) 00:15:58.645 fused_ordering(607) 00:15:58.645 fused_ordering(608) 00:15:58.645 fused_ordering(609) 00:15:58.645 fused_ordering(610) 00:15:58.645 fused_ordering(611) 00:15:58.645 fused_ordering(612) 00:15:58.645 fused_ordering(613) 00:15:58.645 fused_ordering(614) 00:15:58.645 fused_ordering(615) 00:15:59.212 fused_ordering(616) 00:15:59.212 fused_ordering(617) 00:15:59.212 fused_ordering(618) 00:15:59.212 fused_ordering(619) 00:15:59.212 fused_ordering(620) 00:15:59.212 fused_ordering(621) 00:15:59.212 fused_ordering(622) 00:15:59.212 fused_ordering(623) 00:15:59.212 fused_ordering(624) 00:15:59.212 fused_ordering(625) 00:15:59.212 fused_ordering(626) 00:15:59.212 fused_ordering(627) 00:15:59.212 fused_ordering(628) 00:15:59.212 fused_ordering(629) 00:15:59.212 fused_ordering(630) 00:15:59.212 fused_ordering(631) 00:15:59.212 fused_ordering(632) 00:15:59.212 fused_ordering(633) 00:15:59.212 fused_ordering(634) 00:15:59.212 fused_ordering(635) 00:15:59.212 fused_ordering(636) 00:15:59.212 fused_ordering(637) 00:15:59.212 fused_ordering(638) 00:15:59.212 fused_ordering(639) 00:15:59.212 fused_ordering(640) 00:15:59.212 fused_ordering(641) 00:15:59.212 fused_ordering(642) 00:15:59.212 fused_ordering(643) 00:15:59.212 fused_ordering(644) 00:15:59.212 fused_ordering(645) 00:15:59.212 fused_ordering(646) 00:15:59.212 fused_ordering(647) 00:15:59.212 fused_ordering(648) 00:15:59.212 fused_ordering(649) 00:15:59.212 fused_ordering(650) 00:15:59.212 fused_ordering(651) 00:15:59.212 fused_ordering(652) 00:15:59.212 fused_ordering(653) 00:15:59.212 fused_ordering(654) 00:15:59.212 fused_ordering(655) 00:15:59.212 fused_ordering(656) 00:15:59.212 fused_ordering(657) 00:15:59.212 fused_ordering(658) 00:15:59.212 fused_ordering(659) 00:15:59.212 fused_ordering(660) 00:15:59.212 fused_ordering(661) 00:15:59.212 fused_ordering(662) 00:15:59.212 fused_ordering(663) 00:15:59.212 fused_ordering(664) 00:15:59.212 fused_ordering(665) 00:15:59.212 fused_ordering(666) 00:15:59.212 fused_ordering(667) 00:15:59.212 fused_ordering(668) 00:15:59.212 fused_ordering(669) 00:15:59.212 fused_ordering(670) 00:15:59.212 fused_ordering(671) 00:15:59.212 fused_ordering(672) 00:15:59.212 fused_ordering(673) 00:15:59.212 fused_ordering(674) 00:15:59.212 fused_ordering(675) 00:15:59.212 fused_ordering(676) 00:15:59.212 fused_ordering(677) 00:15:59.212 fused_ordering(678) 00:15:59.212 fused_ordering(679) 00:15:59.212 fused_ordering(680) 00:15:59.212 fused_ordering(681) 00:15:59.212 fused_ordering(682) 00:15:59.212 fused_ordering(683) 00:15:59.212 fused_ordering(684) 00:15:59.212 fused_ordering(685) 00:15:59.212 fused_ordering(686) 00:15:59.212 fused_ordering(687) 00:15:59.212 fused_ordering(688) 00:15:59.212 fused_ordering(689) 00:15:59.212 fused_ordering(690) 
00:15:59.212 fused_ordering(691) 00:15:59.212 fused_ordering(692) 00:15:59.212 fused_ordering(693) 00:15:59.212 fused_ordering(694) 00:15:59.212 fused_ordering(695) 00:15:59.212 fused_ordering(696) 00:15:59.212 fused_ordering(697) 00:15:59.212 fused_ordering(698) 00:15:59.212 fused_ordering(699) 00:15:59.212 fused_ordering(700) 00:15:59.212 fused_ordering(701) 00:15:59.212 fused_ordering(702) 00:15:59.212 fused_ordering(703) 00:15:59.212 fused_ordering(704) 00:15:59.212 fused_ordering(705) 00:15:59.212 fused_ordering(706) 00:15:59.212 fused_ordering(707) 00:15:59.212 fused_ordering(708) 00:15:59.212 fused_ordering(709) 00:15:59.212 fused_ordering(710) 00:15:59.212 fused_ordering(711) 00:15:59.212 fused_ordering(712) 00:15:59.212 fused_ordering(713) 00:15:59.212 fused_ordering(714) 00:15:59.212 fused_ordering(715) 00:15:59.212 fused_ordering(716) 00:15:59.212 fused_ordering(717) 00:15:59.212 fused_ordering(718) 00:15:59.212 fused_ordering(719) 00:15:59.212 fused_ordering(720) 00:15:59.212 fused_ordering(721) 00:15:59.212 fused_ordering(722) 00:15:59.212 fused_ordering(723) 00:15:59.212 fused_ordering(724) 00:15:59.212 fused_ordering(725) 00:15:59.212 fused_ordering(726) 00:15:59.212 fused_ordering(727) 00:15:59.212 fused_ordering(728) 00:15:59.212 fused_ordering(729) 00:15:59.212 fused_ordering(730) 00:15:59.212 fused_ordering(731) 00:15:59.212 fused_ordering(732) 00:15:59.212 fused_ordering(733) 00:15:59.212 fused_ordering(734) 00:15:59.212 fused_ordering(735) 00:15:59.212 fused_ordering(736) 00:15:59.212 fused_ordering(737) 00:15:59.212 fused_ordering(738) 00:15:59.212 fused_ordering(739) 00:15:59.212 fused_ordering(740) 00:15:59.212 fused_ordering(741) 00:15:59.212 fused_ordering(742) 00:15:59.212 fused_ordering(743) 00:15:59.212 fused_ordering(744) 00:15:59.212 fused_ordering(745) 00:15:59.212 fused_ordering(746) 00:15:59.212 fused_ordering(747) 00:15:59.212 fused_ordering(748) 00:15:59.212 fused_ordering(749) 00:15:59.212 fused_ordering(750) 00:15:59.212 fused_ordering(751) 00:15:59.212 fused_ordering(752) 00:15:59.212 fused_ordering(753) 00:15:59.212 fused_ordering(754) 00:15:59.212 fused_ordering(755) 00:15:59.212 fused_ordering(756) 00:15:59.212 fused_ordering(757) 00:15:59.212 fused_ordering(758) 00:15:59.212 fused_ordering(759) 00:15:59.212 fused_ordering(760) 00:15:59.212 fused_ordering(761) 00:15:59.212 fused_ordering(762) 00:15:59.212 fused_ordering(763) 00:15:59.212 fused_ordering(764) 00:15:59.212 fused_ordering(765) 00:15:59.212 fused_ordering(766) 00:15:59.212 fused_ordering(767) 00:15:59.212 fused_ordering(768) 00:15:59.212 fused_ordering(769) 00:15:59.212 fused_ordering(770) 00:15:59.212 fused_ordering(771) 00:15:59.212 fused_ordering(772) 00:15:59.212 fused_ordering(773) 00:15:59.212 fused_ordering(774) 00:15:59.212 fused_ordering(775) 00:15:59.212 fused_ordering(776) 00:15:59.212 fused_ordering(777) 00:15:59.212 fused_ordering(778) 00:15:59.212 fused_ordering(779) 00:15:59.212 fused_ordering(780) 00:15:59.212 fused_ordering(781) 00:15:59.212 fused_ordering(782) 00:15:59.212 fused_ordering(783) 00:15:59.212 fused_ordering(784) 00:15:59.212 fused_ordering(785) 00:15:59.212 fused_ordering(786) 00:15:59.212 fused_ordering(787) 00:15:59.212 fused_ordering(788) 00:15:59.212 fused_ordering(789) 00:15:59.212 fused_ordering(790) 00:15:59.212 fused_ordering(791) 00:15:59.212 fused_ordering(792) 00:15:59.212 fused_ordering(793) 00:15:59.212 fused_ordering(794) 00:15:59.212 fused_ordering(795) 00:15:59.212 fused_ordering(796) 00:15:59.212 fused_ordering(797) 00:15:59.212 
fused_ordering(798) 00:15:59.212 fused_ordering(799) 00:15:59.212 fused_ordering(800) 00:15:59.212 fused_ordering(801) 00:15:59.212 fused_ordering(802) 00:15:59.212 fused_ordering(803) 00:15:59.212 fused_ordering(804) 00:15:59.212 fused_ordering(805) 00:15:59.212 fused_ordering(806) 00:15:59.212 fused_ordering(807) 00:15:59.212 fused_ordering(808) 00:15:59.212 fused_ordering(809) 00:15:59.212 fused_ordering(810) 00:15:59.212 fused_ordering(811) 00:15:59.212 fused_ordering(812) 00:15:59.212 fused_ordering(813) 00:15:59.212 fused_ordering(814) 00:15:59.212 fused_ordering(815) 00:15:59.212 fused_ordering(816) 00:15:59.212 fused_ordering(817) 00:15:59.212 fused_ordering(818) 00:15:59.212 fused_ordering(819) 00:15:59.212 fused_ordering(820) 00:16:00.148 fused_ordering(821) 00:16:00.148 fused_ordering(822) 00:16:00.148 fused_ordering(823) 00:16:00.148 fused_ordering(824) 00:16:00.148 fused_ordering(825) 00:16:00.148 fused_ordering(826) 00:16:00.148 fused_ordering(827) 00:16:00.148 fused_ordering(828) 00:16:00.148 fused_ordering(829) 00:16:00.148 fused_ordering(830) 00:16:00.148 fused_ordering(831) 00:16:00.148 fused_ordering(832) 00:16:00.148 fused_ordering(833) 00:16:00.148 fused_ordering(834) 00:16:00.148 fused_ordering(835) 00:16:00.148 fused_ordering(836) 00:16:00.148 fused_ordering(837) 00:16:00.148 fused_ordering(838) 00:16:00.148 fused_ordering(839) 00:16:00.148 fused_ordering(840) 00:16:00.148 fused_ordering(841) 00:16:00.148 fused_ordering(842) 00:16:00.148 fused_ordering(843) 00:16:00.148 fused_ordering(844) 00:16:00.148 fused_ordering(845) 00:16:00.148 fused_ordering(846) 00:16:00.148 fused_ordering(847) 00:16:00.148 fused_ordering(848) 00:16:00.148 fused_ordering(849) 00:16:00.148 fused_ordering(850) 00:16:00.148 fused_ordering(851) 00:16:00.148 fused_ordering(852) 00:16:00.148 fused_ordering(853) 00:16:00.148 fused_ordering(854) 00:16:00.148 fused_ordering(855) 00:16:00.148 fused_ordering(856) 00:16:00.148 fused_ordering(857) 00:16:00.148 fused_ordering(858) 00:16:00.148 fused_ordering(859) 00:16:00.148 fused_ordering(860) 00:16:00.148 fused_ordering(861) 00:16:00.148 fused_ordering(862) 00:16:00.148 fused_ordering(863) 00:16:00.148 fused_ordering(864) 00:16:00.148 fused_ordering(865) 00:16:00.148 fused_ordering(866) 00:16:00.148 fused_ordering(867) 00:16:00.148 fused_ordering(868) 00:16:00.148 fused_ordering(869) 00:16:00.148 fused_ordering(870) 00:16:00.148 fused_ordering(871) 00:16:00.148 fused_ordering(872) 00:16:00.148 fused_ordering(873) 00:16:00.148 fused_ordering(874) 00:16:00.148 fused_ordering(875) 00:16:00.148 fused_ordering(876) 00:16:00.148 fused_ordering(877) 00:16:00.148 fused_ordering(878) 00:16:00.148 fused_ordering(879) 00:16:00.148 fused_ordering(880) 00:16:00.148 fused_ordering(881) 00:16:00.148 fused_ordering(882) 00:16:00.148 fused_ordering(883) 00:16:00.148 fused_ordering(884) 00:16:00.148 fused_ordering(885) 00:16:00.148 fused_ordering(886) 00:16:00.148 fused_ordering(887) 00:16:00.148 fused_ordering(888) 00:16:00.148 fused_ordering(889) 00:16:00.148 fused_ordering(890) 00:16:00.148 fused_ordering(891) 00:16:00.148 fused_ordering(892) 00:16:00.148 fused_ordering(893) 00:16:00.148 fused_ordering(894) 00:16:00.148 fused_ordering(895) 00:16:00.148 fused_ordering(896) 00:16:00.148 fused_ordering(897) 00:16:00.148 fused_ordering(898) 00:16:00.148 fused_ordering(899) 00:16:00.148 fused_ordering(900) 00:16:00.148 fused_ordering(901) 00:16:00.148 fused_ordering(902) 00:16:00.148 fused_ordering(903) 00:16:00.148 fused_ordering(904) 00:16:00.148 fused_ordering(905) 
00:16:00.148 fused_ordering(906) 00:16:00.148 fused_ordering(907) 00:16:00.148 fused_ordering(908) 00:16:00.148 fused_ordering(909) 00:16:00.148 fused_ordering(910) 00:16:00.148 fused_ordering(911) 00:16:00.148 fused_ordering(912) 00:16:00.148 fused_ordering(913) 00:16:00.148 fused_ordering(914) 00:16:00.148 fused_ordering(915) 00:16:00.148 fused_ordering(916) 00:16:00.148 fused_ordering(917) 00:16:00.148 fused_ordering(918) 00:16:00.148 fused_ordering(919) 00:16:00.148 fused_ordering(920) 00:16:00.148 fused_ordering(921) 00:16:00.148 fused_ordering(922) 00:16:00.148 fused_ordering(923) 00:16:00.148 fused_ordering(924) 00:16:00.148 fused_ordering(925) 00:16:00.148 fused_ordering(926) 00:16:00.148 fused_ordering(927) 00:16:00.148 fused_ordering(928) 00:16:00.148 fused_ordering(929) 00:16:00.148 fused_ordering(930) 00:16:00.148 fused_ordering(931) 00:16:00.148 fused_ordering(932) 00:16:00.148 fused_ordering(933) 00:16:00.148 fused_ordering(934) 00:16:00.148 fused_ordering(935) 00:16:00.148 fused_ordering(936) 00:16:00.148 fused_ordering(937) 00:16:00.148 fused_ordering(938) 00:16:00.148 fused_ordering(939) 00:16:00.148 fused_ordering(940) 00:16:00.148 fused_ordering(941) 00:16:00.148 fused_ordering(942) 00:16:00.148 fused_ordering(943) 00:16:00.148 fused_ordering(944) 00:16:00.148 fused_ordering(945) 00:16:00.148 fused_ordering(946) 00:16:00.148 fused_ordering(947) 00:16:00.148 fused_ordering(948) 00:16:00.148 fused_ordering(949) 00:16:00.148 fused_ordering(950) 00:16:00.148 fused_ordering(951) 00:16:00.148 fused_ordering(952) 00:16:00.148 fused_ordering(953) 00:16:00.148 fused_ordering(954) 00:16:00.148 fused_ordering(955) 00:16:00.148 fused_ordering(956) 00:16:00.148 fused_ordering(957) 00:16:00.148 fused_ordering(958) 00:16:00.148 fused_ordering(959) 00:16:00.148 fused_ordering(960) 00:16:00.148 fused_ordering(961) 00:16:00.148 fused_ordering(962) 00:16:00.148 fused_ordering(963) 00:16:00.148 fused_ordering(964) 00:16:00.148 fused_ordering(965) 00:16:00.148 fused_ordering(966) 00:16:00.148 fused_ordering(967) 00:16:00.148 fused_ordering(968) 00:16:00.148 fused_ordering(969) 00:16:00.148 fused_ordering(970) 00:16:00.148 fused_ordering(971) 00:16:00.148 fused_ordering(972) 00:16:00.148 fused_ordering(973) 00:16:00.148 fused_ordering(974) 00:16:00.148 fused_ordering(975) 00:16:00.148 fused_ordering(976) 00:16:00.148 fused_ordering(977) 00:16:00.148 fused_ordering(978) 00:16:00.148 fused_ordering(979) 00:16:00.148 fused_ordering(980) 00:16:00.149 fused_ordering(981) 00:16:00.149 fused_ordering(982) 00:16:00.149 fused_ordering(983) 00:16:00.149 fused_ordering(984) 00:16:00.149 fused_ordering(985) 00:16:00.149 fused_ordering(986) 00:16:00.149 fused_ordering(987) 00:16:00.149 fused_ordering(988) 00:16:00.149 fused_ordering(989) 00:16:00.149 fused_ordering(990) 00:16:00.149 fused_ordering(991) 00:16:00.149 fused_ordering(992) 00:16:00.149 fused_ordering(993) 00:16:00.149 fused_ordering(994) 00:16:00.149 fused_ordering(995) 00:16:00.149 fused_ordering(996) 00:16:00.149 fused_ordering(997) 00:16:00.149 fused_ordering(998) 00:16:00.149 fused_ordering(999) 00:16:00.149 fused_ordering(1000) 00:16:00.149 fused_ordering(1001) 00:16:00.149 fused_ordering(1002) 00:16:00.149 fused_ordering(1003) 00:16:00.149 fused_ordering(1004) 00:16:00.149 fused_ordering(1005) 00:16:00.149 fused_ordering(1006) 00:16:00.149 fused_ordering(1007) 00:16:00.149 fused_ordering(1008) 00:16:00.149 fused_ordering(1009) 00:16:00.149 fused_ordering(1010) 00:16:00.149 fused_ordering(1011) 00:16:00.149 fused_ordering(1012) 
00:16:00.149 fused_ordering(1013) 00:16:00.149 fused_ordering(1014) 00:16:00.149 fused_ordering(1015) 00:16:00.149 fused_ordering(1016) 00:16:00.149 fused_ordering(1017) 00:16:00.149 fused_ordering(1018) 00:16:00.149 fused_ordering(1019) 00:16:00.149 fused_ordering(1020) 00:16:00.149 fused_ordering(1021) 00:16:00.149 fused_ordering(1022) 00:16:00.149 fused_ordering(1023) 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:00.149 rmmod nvme_tcp 00:16:00.149 rmmod nvme_fabrics 00:16:00.149 rmmod nvme_keyring 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 242897 ']' 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 242897 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 242897 ']' 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 242897 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 242897 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 242897' 00:16:00.149 killing process with pid 242897 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 242897 00:16:00.149 13:27:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 242897 00:16:01.523 13:27:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:01.523 13:27:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:01.523 13:27:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:01.523 13:27:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.523 13:27:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:01.523 13:27:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.523 13:27:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:16:01.523 13:27:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.055 13:27:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:04.055 00:16:04.055 real 0m9.865s 00:16:04.055 user 0m7.920s 00:16:04.055 sys 0m3.704s 00:16:04.055 13:27:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.055 13:27:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:04.055 ************************************ 00:16:04.055 END TEST nvmf_fused_ordering 00:16:04.055 ************************************ 00:16:04.055 13:27:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:04.055 13:27:38 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:04.055 13:27:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:04.055 13:27:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.055 13:27:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:04.055 ************************************ 00:16:04.055 START TEST nvmf_delete_subsystem 00:16:04.055 ************************************ 00:16:04.055 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:04.055 * Looking for test storage... 00:16:04.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.055 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.055 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:16:04.055 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.055 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.055 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.055 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.055 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.055 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.055 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.055 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.055 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.055 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.056 13:27:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:05.953 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:05.953 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
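The trace above builds per-vendor PCI device-ID lists (E810 0x1592/0x159b, X722 0x37d2, several Mellanox IDs) and then walks the detected devices, reporting each match such as "Found 0000:0a:00.0 (0x8086 - 0x159b)". A minimal bash sketch of the same discovery idea — not the common.sh implementation itself; the sysfs layout is standard, but treat the loop as an illustration of how the cached lookup behaves, including the net-device listing that the next lines of the trace perform:

  # Sketch: find Intel E810 (0x8086:0x159b) functions and the kernel net devices under them.
  intel=0x8086
  e810=0x159b
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
      echo "Found ${pci##*/} ($intel - $e810)"
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "  net device: ${net##*/}"   # e.g. cvl_0_0 / cvl_0_1 in this run
      done
  done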
00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:05.953 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:05.953 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:05.953 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:05.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:05.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:16:05.954 00:16:05.954 --- 10.0.0.2 ping statistics --- 00:16:05.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.954 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:05.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:05.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:16:05.954 00:16:05.954 --- 10.0.0.1 ping statistics --- 00:16:05.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.954 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=245493 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 245493 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 245493 ']' 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.954 13:27:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:05.954 [2024-07-13 13:27:40.470200] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:05.954 [2024-07-13 13:27:40.470343] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.954 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.954 [2024-07-13 13:27:40.610787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:06.211 [2024-07-13 13:27:40.870887] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
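nvmf_tcp_init, traced a few lines above, splits the two E810 ports across a network namespace so that initiator and target can exchange NVMe/TCP traffic over real addresses on one host. A condensed sketch of that plumbing, using the namespace name, interface names, and 10.0.0.x/24 addressing exactly as this run shows (error handling omitted):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP replies back in
  ping -c 1 10.0.0.2                                  # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1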
00:16:06.211 [2024-07-13 13:27:40.870975] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.211 [2024-07-13 13:27:40.871009] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.211 [2024-07-13 13:27:40.871031] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.211 [2024-07-13 13:27:40.871052] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:06.211 [2024-07-13 13:27:40.871171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.211 [2024-07-13 13:27:40.871179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.774 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.774 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:16:06.774 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:06.774 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.775 [2024-07-13 13:27:41.395790] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.775 [2024-07-13 13:27:41.412756] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.775 NULL1 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.775 Delay0 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=245565 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:06.775 13:27:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:07.032 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.032 [2024-07-13 13:27:41.547824] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
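The rpc_cmd calls above configure the target that spdk_nvme_perf then drives. Restated as direct rpc.py invocations for readability — the rpc.py path matches the one this workspace uses later in the log, and the flag comments are a reasonable reading of the traced arguments rather than something the trace itself states:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192              # TCP transport, 8 KiB IO unit size
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                      # 1000 MB null bdev, 512 B blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s delays (us)
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # 5 s randrw load against the listener created above, as launched by the test:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4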
00:16:08.929 13:27:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.929 13:27:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.929 13:27:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 [2024-07-13 13:27:43.698197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(5) to be set 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 
Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Write completed with error 
(sct=0, sc=8) 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Write completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.188 starting I/O failed: -6 00:16:09.188 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 starting I/O failed: -6 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 starting I/O failed: -6 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 starting I/O failed: -6 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 starting I/O failed: -6 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 starting I/O failed: -6 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 [2024-07-13 13:27:43.699954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016100 is same with the state(5) to be set 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with 
error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Write completed with error (sct=0, sc=8) 00:16:09.189 Read completed with error (sct=0, sc=8) 00:16:09.189 [2024-07-13 13:27:43.701193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(5) to be set 00:16:10.123 [2024-07-13 13:27:44.649155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(5) to be set 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 [2024-07-13 13:27:44.700405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(5) to be set 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write 
completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 [2024-07-13 13:27:44.701075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(5) to be set 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 [2024-07-13 13:27:44.702012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(5) to be set 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Write 
completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 13:27:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.123 13:27:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:16:10.123 13:27:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 245565 00:16:10.123 13:27:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Read completed with error (sct=0, sc=8) 00:16:10.123 Write completed with error (sct=0, sc=8) 00:16:10.123 [2024-07-13 13:27:44.706222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(5) to be set 00:16:10.123 Initializing NVMe Controllers 00:16:10.123 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:10.123 Controller IO queue size 128, less than required. 00:16:10.123 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:10.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:10.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:10.123 Initialization complete. Launching workers. 00:16:10.123 ======================================================== 00:16:10.123 Latency(us) 00:16:10.123 Device Information : IOPS MiB/s Average min max 00:16:10.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.26 0.08 899601.65 1147.87 1015170.13 00:16:10.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 168.27 0.08 898328.41 859.89 1015173.43 00:16:10.123 ======================================================== 00:16:10.123 Total : 337.53 0.16 898966.89 859.89 1015173.43 00:16:10.123 00:16:10.123 [2024-07-13 13:27:44.707752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015980 (9): Bad file descriptor 00:16:10.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 245565 00:16:10.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (245565) - No such process 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 245565 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 245565 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:16:10.689 13:27:45 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 245565 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:10.689 [2024-07-13 13:27:45.225878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.689 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:10.690 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.690 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=246062 00:16:10.690 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:10.690 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:16:10.690 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 246062 00:16:10.690 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:10.690 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.690 [2024-07-13 13:27:45.347636] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
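The delay/kill -0/sleep lines that follow are the test's liveness poll on the second perf process (pid 246062). A small sketch of the idiom — kill -0 delivers no signal, it only reports via its exit status whether the PID still exists — with the loop bound and interval taken from the traced script (the failure message is illustrative, not the script's own):

  perf_pid=246062       # PID printed by the traced run; any child PID works the same way
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do       # still running?
      if (( delay++ > 20 )); then                 # ~20 * 0.5 s budget, as in the script
          echo "process $perf_pid did not exit in time" >&2
          break
      fi
      sleep 0.5
  done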
00:16:11.257 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:11.257 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 246062 00:16:11.257 13:27:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:11.515 13:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:11.515 13:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 246062 00:16:11.515 13:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:12.078 13:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:12.078 13:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 246062 00:16:12.078 13:27:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:12.644 13:27:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:12.644 13:27:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 246062 00:16:12.644 13:27:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:13.210 13:27:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:13.210 13:27:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 246062 00:16:13.210 13:27:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:13.774 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:13.774 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 246062 00:16:13.774 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:14.033 Initializing NVMe Controllers 00:16:14.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:14.033 Controller IO queue size 128, less than required. 00:16:14.033 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:14.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:14.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:14.033 Initialization complete. Launching workers. 
00:16:14.033 ======================================================== 00:16:14.033 Latency(us) 00:16:14.033 Device Information : IOPS MiB/s Average min max 00:16:14.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004501.72 1000362.29 1014248.93 00:16:14.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006339.98 1000362.86 1014715.38 00:16:14.033 ======================================================== 00:16:14.033 Total : 256.00 0.12 1005420.85 1000362.29 1014715.38 00:16:14.033 00:16:14.033 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:14.033 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 246062 00:16:14.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (246062) - No such process 00:16:14.033 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 246062 00:16:14.033 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:14.033 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:14.033 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.033 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:16:14.033 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.033 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:16:14.033 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.033 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.033 rmmod nvme_tcp 00:16:14.292 rmmod nvme_fabrics 00:16:14.292 rmmod nvme_keyring 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 245493 ']' 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 245493 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 245493 ']' 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 245493 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 245493 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 245493' 00:16:14.292 killing process with pid 245493 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 245493 00:16:14.292 13:27:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 245493 
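The tail of the run above is nvmftestfini: unload the host-side NVMe modules, kill the nvmf_tgt reactor started earlier, and (in the lines that follow) undo the namespace plumbing. A hedged sketch of that cleanup — "ip netns del" is an assumption about what _remove_spdk_ns amounts to here; the module and address-flush commands mirror the trace:

  modprobe -v -r nvme-tcp         # also drops nvme_fabrics / nvme_keyring, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" 2>/dev/null     # nvmf_tgt pid recorded at startup (245493 in this run)
  ip netns del cvl_0_0_ns_spdk 2>/dev/null
  ip -4 addr flush cvl_0_1        # drop the 10.0.0.1/24 address from the initiator port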
00:16:15.668 13:27:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:15.668 13:27:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:15.668 13:27:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:15.668 13:27:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.668 13:27:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:15.668 13:27:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.668 13:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.668 13:27:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.601 13:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:17.601 00:16:17.601 real 0m13.948s 00:16:17.601 user 0m30.523s 00:16:17.601 sys 0m3.166s 00:16:17.601 13:27:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:17.601 13:27:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.601 ************************************ 00:16:17.601 END TEST nvmf_delete_subsystem 00:16:17.601 ************************************ 00:16:17.601 13:27:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:17.601 13:27:52 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:17.601 13:27:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:17.601 13:27:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.601 13:27:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:17.601 ************************************ 00:16:17.601 START TEST nvmf_ns_masking 00:16:17.601 ************************************ 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:17.601 * Looking for test storage... 
00:16:17.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.601 13:27:52 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3f1ad5ee-2959-4e66-a45f-75dfe971430f 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3f0a8621-d213-4b21-b1ef-bb8034602821 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=84adcd9e-edf8-4a15-b68f-79c9970ce4df 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:17.602 13:27:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.130 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:20.131 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:20.131 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.131 
13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:20.131 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:20.131 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:20.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:16:20.131 00:16:20.131 --- 10.0.0.2 ping statistics --- 00:16:20.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.131 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:16:20.131 00:16:20.131 --- 10.0.0.1 ping statistics --- 00:16:20.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.131 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=248539 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 248539 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 248539 ']' 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.131 13:27:54 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.131 13:27:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:20.131 [2024-07-13 13:27:54.553530] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:20.131 [2024-07-13 13:27:54.553683] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.131 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.131 [2024-07-13 13:27:54.691909] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.389 [2024-07-13 13:27:54.951313] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.389 [2024-07-13 13:27:54.951385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.389 [2024-07-13 13:27:54.951413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.389 [2024-07-13 13:27:54.951438] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.389 [2024-07-13 13:27:54.951460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.389 [2024-07-13 13:27:54.951516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.954 13:27:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.954 13:27:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:16:20.954 13:27:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:20.954 13:27:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:20.954 13:27:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:20.954 13:27:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.954 13:27:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:21.212 [2024-07-13 13:27:55.766716] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.212 13:27:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:21.212 13:27:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:21.212 13:27:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:21.470 Malloc1 00:16:21.470 13:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:22.035 Malloc2 00:16:22.035 13:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
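(Condensed, the target-side bring-up traced above boils down to a short RPC sequence; the values are copied from this run, and the serial number and bdev names are simply the suite's defaults:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8 KiB IO unit
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1               # 64 MiB, 512 B-block namespaces
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
)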
00:16:22.292 13:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:22.550 13:27:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.808 [2024-07-13 13:27:57.377092] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.808 13:27:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:22.808 13:27:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 84adcd9e-edf8-4a15-b68f-79c9970ce4df -a 10.0.0.2 -s 4420 -i 4 00:16:22.808 13:27:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:22.808 13:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:22.808 13:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.808 13:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:22.808 13:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:25.334 [ 0]:0x1 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09cdb2a0f3ad45628dae4fd8565965ea 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09cdb2a0f3ad45628dae4fd8565965ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:25.334 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
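(On the initiator side, the visibility check that follows reduces to connecting with nvme-cli under a specific host NQN and reading the namespace's NGUID; in this trace an all-zero NGUID is what a masked namespace reports, while a visible one returns its real NGUID. A minimal version of that loop, with the NQNs, host UUID and address taken from this run, looks like:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
       -I 84adcd9e-edf8-4a15-b68f-79c9970ce4df
  nvme list-ns /dev/nvme0                                  # active NSIDs, e.g. "[ 0]:0x1"
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid      # all zeros => masked for this host
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
)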
00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:25.335 [ 0]:0x1 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09cdb2a0f3ad45628dae4fd8565965ea 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09cdb2a0f3ad45628dae4fd8565965ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:25.335 [ 1]:0x2 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a2c0e26da3514e62ac2baf92262796ed 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a2c0e26da3514e62ac2baf92262796ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:25.335 13:27:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.335 13:28:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.593 13:28:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:25.851 13:28:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:25.851 13:28:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 84adcd9e-edf8-4a15-b68f-79c9970ce4df -a 10.0.0.2 -s 4420 -i 4 00:16:26.109 13:28:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:26.109 13:28:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:26.109 13:28:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.109 13:28:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:26.109 13:28:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:26.109 13:28:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:28.008 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:28.008 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:28.008 13:28:02 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:28.008 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:28.008 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:28.008 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:28.008 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:28.008 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:28.279 [ 0]:0x2 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a2c0e26da3514e62ac2baf92262796ed 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
a2c0e26da3514e62ac2baf92262796ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:28.279 13:28:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:28.545 [ 0]:0x1 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09cdb2a0f3ad45628dae4fd8565965ea 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09cdb2a0f3ad45628dae4fd8565965ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:28.545 [ 1]:0x2 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a2c0e26da3514e62ac2baf92262796ed 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a2c0e26da3514e62ac2baf92262796ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:28.545 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:28.803 [ 0]:0x2 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:28.803 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:29.061 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a2c0e26da3514e62ac2baf92262796ed 00:16:29.061 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a2c0e26da3514e62ac2baf92262796ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:29.061 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:29.061 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:29.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.061 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:29.318 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:29.318 13:28:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 84adcd9e-edf8-4a15-b68f-79c9970ce4df -a 10.0.0.2 -s 4420 -i 4 00:16:29.318 13:28:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:29.318 13:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:29.318 13:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.318 13:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:29.318 13:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:29.318 13:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
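(The masking itself is driven by just two RPCs on top of a namespace that was added with --no-auto-visible; hosts are then granted or revoked access per NSID, which is exactly what the visibility checks above and below exercise. With the NQNs used throughout this run:
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  scripts/rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask NSID 1 for host1
  scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hide it again
)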
00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:31.846 [ 0]:0x1 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=09cdb2a0f3ad45628dae4fd8565965ea 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 09cdb2a0f3ad45628dae4fd8565965ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:31.846 [ 1]:0x2 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a2c0e26da3514e62ac2baf92262796ed 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a2c0e26da3514e62ac2baf92262796ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:31.846 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:31.847 [ 0]:0x2 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a2c0e26da3514e62ac2baf92262796ed 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a2c0e26da3514e62ac2baf92262796ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:31.847 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:32.105 [2024-07-13 13:28:06.753909] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:32.105 request: 00:16:32.105 { 00:16:32.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.105 "nsid": 2, 00:16:32.105 "host": "nqn.2016-06.io.spdk:host1", 00:16:32.105 "method": "nvmf_ns_remove_host", 00:16:32.105 "req_id": 1 00:16:32.105 } 00:16:32.105 Got JSON-RPC error response 00:16:32.105 response: 00:16:32.105 { 00:16:32.105 "code": -32602, 00:16:32.105 "message": "Invalid parameters" 00:16:32.105 } 00:16:32.105 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:32.105 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:32.105 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:32.105 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:32.105 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:32.105 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:32.105 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:32.106 [ 0]:0x2 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:32.106 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:32.364 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a2c0e26da3514e62ac2baf92262796ed 00:16:32.364 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
a2c0e26da3514e62ac2baf92262796ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:32.364 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:32.364 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.364 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=250157 00:16:32.364 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:32.365 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.365 13:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 250157 /var/tmp/host.sock 00:16:32.365 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 250157 ']' 00:16:32.365 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:32.365 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:32.365 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:32.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:32.365 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:32.365 13:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:32.365 [2024-07-13 13:28:07.006512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
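(The final phase replaces nvme-cli with a second SPDK application as the host: the spdk_tgt started just above on /var/tmp/host.sock with core mask 0x2. The namespaces are re-created with fixed NGUIDs derived from the UUIDs generated at the top of the suite, each is exposed to a different host NQN, and visibility is judged by which nvme bdevs appear after attaching. The host-side half of that, with the socket path and NQNs from this run, is roughly:
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0    # NSID 1 visible: expect bdev nvme0n1
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1    # NSID 2 visible: expect bdev nvme1n2
  scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name'      # "nvme0n1 nvme1n2" in this run
)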
00:16:32.365 [2024-07-13 13:28:07.006668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250157 ] 00:16:32.365 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.623 [2024-07-13 13:28:07.139977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.882 [2024-07-13 13:28:07.394653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.817 13:28:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.817 13:28:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:16:33.817 13:28:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.817 13:28:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:34.102 13:28:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3f1ad5ee-2959-4e66-a45f-75dfe971430f 00:16:34.102 13:28:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:34.102 13:28:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3F1AD5EE29594E66A45F75DFE971430F -i 00:16:34.369 13:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3f0a8621-d213-4b21-b1ef-bb8034602821 00:16:34.369 13:28:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:34.369 13:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3F0A8621D2134B21B1EFBB8034602821 -i 00:16:34.627 13:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:35.193 13:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:35.193 13:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:35.193 13:28:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:35.451 nvme0n1 00:16:35.709 13:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:35.710 13:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:16:35.968 nvme1n2 00:16:35.968 13:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:35.968 13:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:35.968 13:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:35.968 13:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:35.968 13:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:36.226 13:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:36.226 13:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:36.226 13:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:36.226 13:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:36.484 13:28:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3f1ad5ee-2959-4e66-a45f-75dfe971430f == \3\f\1\a\d\5\e\e\-\2\9\5\9\-\4\e\6\6\-\a\4\5\f\-\7\5\d\f\e\9\7\1\4\3\0\f ]] 00:16:36.484 13:28:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:36.484 13:28:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:36.484 13:28:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:36.742 13:28:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 3f0a8621-d213-4b21-b1ef-bb8034602821 == \3\f\0\a\8\6\2\1\-\d\2\1\3\-\4\b\2\1\-\b\1\e\f\-\b\b\8\0\3\4\6\0\2\8\2\1 ]] 00:16:36.742 13:28:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 250157 00:16:36.742 13:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 250157 ']' 00:16:36.742 13:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 250157 00:16:36.743 13:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:16:36.743 13:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:36.743 13:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 250157 00:16:36.743 13:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:36.743 13:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:36.743 13:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 250157' 00:16:36.743 killing process with pid 250157 00:16:36.743 13:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 250157 00:16:36.743 13:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 250157 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:39.273 13:28:13 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:39.273 rmmod nvme_tcp 00:16:39.273 rmmod nvme_fabrics 00:16:39.273 rmmod nvme_keyring 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 248539 ']' 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 248539 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 248539 ']' 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 248539 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 248539 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 248539' 00:16:39.273 killing process with pid 248539 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 248539 00:16:39.273 13:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 248539 00:16:41.173 13:28:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:41.173 13:28:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:41.173 13:28:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:41.173 13:28:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.173 13:28:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:41.173 13:28:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.173 13:28:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.173 13:28:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.075 13:28:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:43.075 00:16:43.075 real 0m25.491s 00:16:43.075 user 0m34.554s 00:16:43.075 sys 0m4.480s 00:16:43.075 13:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:43.075 13:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:43.075 ************************************ 00:16:43.075 END TEST nvmf_ns_masking 00:16:43.075 ************************************ 00:16:43.075 13:28:17 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:16:43.075 13:28:17 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:16:43.075 13:28:17 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:43.075 13:28:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:43.075 13:28:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:43.075 13:28:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:43.075 ************************************ 00:16:43.075 START TEST nvmf_nvme_cli 00:16:43.075 ************************************ 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:43.075 * Looking for test storage... 00:16:43.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.075 13:28:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:43.076 13:28:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:45.606 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:45.606 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:45.606 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:45.606 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.606 13:28:19 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:45.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:16:45.606 00:16:45.606 --- 10.0.0.2 ping statistics --- 00:16:45.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.606 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:45.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:16:45.606 00:16:45.606 --- 10.0.0.1 ping statistics --- 00:16:45.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.606 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=253055 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 253055 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 253055 ']' 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.606 13:28:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:45.607 [2024-07-13 13:28:20.045480] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
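The nvmftestinit sequence traced above builds the test topology by moving one of the two detected e810 port netdevs into a private network namespace, so the target listens on 10.0.0.2 inside the namespace while the initiator talks from 10.0.0.1 in the root namespace. Condensed, that wiring looks roughly like the sketch below; the cvl_0_0/cvl_0_1 names are the ones printed by this particular run and would differ on other hosts.
    ip netns add cvl_0_0_ns_spdk                                          # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address (inside namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # admit NVMe/TCP traffic on the initiator side
    ping -c 1 10.0.0.2                                                    # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator reachability check
Because the target side lives in that namespace, nvmf_tgt itself is launched with the ip netns exec cvl_0_0_ns_spdk prefix, which is why the target-side commands throughout this log carry that prefix.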
00:16:45.607 [2024-07-13 13:28:20.045652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.607 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.607 [2024-07-13 13:28:20.204118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.864 [2024-07-13 13:28:20.471027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.864 [2024-07-13 13:28:20.471101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.864 [2024-07-13 13:28:20.471129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.864 [2024-07-13 13:28:20.471150] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.864 [2024-07-13 13:28:20.471172] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.864 [2024-07-13 13:28:20.471303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.864 [2024-07-13 13:28:20.471367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.864 [2024-07-13 13:28:20.471412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.864 [2024-07-13 13:28:20.471423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.428 13:28:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.428 13:28:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:16:46.428 13:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.428 13:28:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.428 13:28:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.428 13:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.428 13:28:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.428 13:28:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.428 13:28:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.428 [2024-07-13 13:28:20.972818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.428 13:28:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.428 13:28:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:46.428 13:28:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.428 13:28:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.428 Malloc0 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.428 Malloc1 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.428 13:28:21 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.428 [2024-07-13 13:28:21.162441] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.428 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.685 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.685 13:28:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:16:46.685 00:16:46.685 Discovery Log Number of Records 2, Generation counter 2 00:16:46.685 =====Discovery Log Entry 0====== 00:16:46.685 trtype: tcp 00:16:46.685 adrfam: ipv4 00:16:46.685 subtype: current discovery subsystem 00:16:46.685 treq: not required 00:16:46.685 portid: 0 00:16:46.685 trsvcid: 4420 00:16:46.685 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:46.685 traddr: 10.0.0.2 00:16:46.685 eflags: explicit discovery connections, duplicate discovery information 00:16:46.685 sectype: none 00:16:46.685 =====Discovery Log Entry 1====== 00:16:46.685 trtype: tcp 00:16:46.685 adrfam: ipv4 00:16:46.685 subtype: nvme subsystem 00:16:46.685 treq: not required 00:16:46.685 portid: 0 00:16:46.685 trsvcid: 4420 00:16:46.685 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:46.685 traddr: 10.0.0.2 00:16:46.685 eflags: none 00:16:46.685 sectype: none 00:16:46.685 13:28:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:46.685 13:28:21 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:46.685 13:28:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:46.685 13:28:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:46.685 13:28:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:46.685 13:28:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:46.685 13:28:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:46.685 13:28:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:46.685 13:28:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:46.685 13:28:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:46.685 13:28:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.250 13:28:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:47.250 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:47.250 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.250 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:47.250 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:47.250 13:28:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:49.145 13:28:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:49.145 13:28:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:49.145 13:28:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.145 13:28:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:49.145 13:28:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.145 13:28:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:49.145 13:28:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:49.145 13:28:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:49.145 13:28:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.145 13:28:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:49.402 13:28:24 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:49.402 /dev/nvme0n1 ]] 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.402 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:49.660 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:49.660 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.660 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:49.660 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.660 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:49.660 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:49.660 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.660 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:49.660 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:49.660 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:49.660 13:28:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:49.660 13:28:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:49.917 rmmod nvme_tcp 00:16:49.917 rmmod nvme_fabrics 00:16:49.917 rmmod nvme_keyring 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 253055 ']' 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 253055 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 253055 ']' 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 253055 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 253055 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 253055' 00:16:49.917 killing process with pid 253055 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 253055 00:16:49.917 13:28:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 253055 00:16:51.820 13:28:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.820 13:28:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:51.820 13:28:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:51.820 13:28:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.820 13:28:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:51.820 13:28:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.820 13:28:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.820 13:28:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.759 13:28:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:53.759 00:16:53.759 real 0m10.501s 00:16:53.759 user 0m21.863s 00:16:53.759 sys 0m2.456s 00:16:53.759 13:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:53.759 13:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.759 ************************************ 00:16:53.759 END TEST nvmf_nvme_cli 00:16:53.759 ************************************ 00:16:53.759 13:28:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:53.759 13:28:28 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:53.759 13:28:28 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:53.759 13:28:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:53.759 13:28:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.759 13:28:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:53.759 ************************************ 00:16:53.759 START TEST nvmf_host_management 00:16:53.759 ************************************ 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:53.759 * Looking for test storage... 00:16:53.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:53.759 
13:28:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:53.759 13:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:53.760 13:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:55.665 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:55.665 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ 
up == up ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:55.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:55.665 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:55.665 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:55.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:16:55.924 00:16:55.924 --- 10.0.0.2 ping statistics --- 00:16:55.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.924 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:55.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:55.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:16:55.924 00:16:55.924 --- 10.0.0.1 ping statistics --- 00:16:55.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.924 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=255813 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 255813 00:16:55.924 
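The sequence above is the TCP-specific bring-up: one port of the E810 pair is moved into a private network namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic must cross the physical link, a firewall exception is opened for the NVMe/TCP port, and both directions are ping-verified before the target application is launched inside the namespace. A minimal sketch of that topology setup, assuming the interface names cvl_0_0/cvl_0_1 seen in this run; the nvmf_tcp_init helper performs the same steps with additional cleanup and error handling:

    # Namespace-based NVMe/TCP test topology (sketch; run as root).
    TARGET_IF=cvl_0_0        # serves 10.0.0.2 from inside the namespace
    INITIATOR_IF=cvl_0_1     # stays in the default namespace, uses 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Accept NVMe/TCP (port 4420) coming in from the initiator-side port.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # Verify both directions before starting nvmf_tgt inside the namespace.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1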
13:28:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 255813 ']' 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.924 13:28:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:55.924 [2024-07-13 13:28:30.580735] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:55.924 [2024-07-13 13:28:30.580889] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.924 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.183 [2024-07-13 13:28:30.717761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.441 [2024-07-13 13:28:30.980315] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.441 [2024-07-13 13:28:30.980392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.441 [2024-07-13 13:28:30.980421] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.441 [2024-07-13 13:28:30.980443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.441 [2024-07-13 13:28:30.980464] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
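waitforlisten, whose trace opens this block, blocks until the freshly started nvmf_tgt answers on its RPC socket so that the rpc_cmd calls that follow do not race the application start-up. A rough sketch of that wait loop, assuming it runs from the SPDK repo root against the default /var/tmp/spdk.sock socket; the real helper in autotest_common.sh is more elaborate about retries and error reporting:

    # Block until an SPDK app with the given pid answers RPCs on its socket (sketch).
    wait_for_rpc() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1      # app died -> stop waiting
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                 # a successful RPC means it is listening
            fi
            sleep 0.1
        done
        return 1
    }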
00:16:56.441 [2024-07-13 13:28:30.980597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.441 [2024-07-13 13:28:30.980655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.442 [2024-07-13 13:28:30.980702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.442 [2024-07-13 13:28:30.980712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.008 [2024-07-13 13:28:31.554323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.008 Malloc0 00:16:57.008 [2024-07-13 13:28:31.665230] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=255989 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 255989 /var/tmp/bdevperf.sock 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 255989 ']' 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # 
gen_nvmf_target_json 0 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:57.008 { 00:16:57.008 "params": { 00:16:57.008 "name": "Nvme$subsystem", 00:16:57.008 "trtype": "$TEST_TRANSPORT", 00:16:57.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.008 "adrfam": "ipv4", 00:16:57.008 "trsvcid": "$NVMF_PORT", 00:16:57.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.008 "hdgst": ${hdgst:-false}, 00:16:57.008 "ddgst": ${ddgst:-false} 00:16:57.008 }, 00:16:57.008 "method": "bdev_nvme_attach_controller" 00:16:57.008 } 00:16:57.008 EOF 00:16:57.008 )") 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:57.008 13:28:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:57.008 "params": { 00:16:57.008 "name": "Nvme0", 00:16:57.008 "trtype": "tcp", 00:16:57.008 "traddr": "10.0.0.2", 00:16:57.008 "adrfam": "ipv4", 00:16:57.008 "trsvcid": "4420", 00:16:57.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:57.008 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:57.008 "hdgst": false, 00:16:57.008 "ddgst": false 00:16:57.008 }, 00:16:57.008 "method": "bdev_nvme_attach_controller" 00:16:57.008 }' 00:16:57.266 [2024-07-13 13:28:31.777897] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:57.266 [2024-07-13 13:28:31.778038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid255989 ] 00:16:57.266 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.266 [2024-07-13 13:28:31.908963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.525 [2024-07-13 13:28:32.148062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.091 Running I/O for 10 seconds... 
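With the target configured and bdevperf started against it, the entries that follow poll the bdevperf RPC socket until the Nvme0n1 bdev has actually completed a threshold of reads, so the host-management fault is injected only once I/O is flowing (the first poll reads 3 ops, the next one 387). A condensed sketch of that waitforio loop, using the same bdev_get_iostat RPC and jq filter seen in the trace; the script issues the call through its rpc_cmd wrapper rather than rpc.py directly:

    # Poll until the bdev reports enough completed reads (sketch of waitforio).
    waitforio() {
        local rpc_sock=$1 bdev=$2
        local i=10 count
        while (( i-- > 0 )); do
            count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                    | jq -r '.bdevs[0].num_read_ops')
            (( count >= 100 )) && return 0    # traffic confirmed
            sleep 0.25
        done
        return 1                              # no I/O observed within ~2.5s
    }

    waitforio /var/tmp/bdevperf.sock Nvme0n1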
00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:16:58.091 13:28:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:58.349 13:28:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:58.349 13:28:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:58.349 13:28:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:58.349 13:28:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:58.349 13:28:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.349 13:28:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:58.349 13:28:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.607 13:28:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:16:58.607 13:28:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:16:58.607 13:28:33 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:58.607 13:28:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:58.607 13:28:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:58.607 13:28:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:58.607 13:28:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.607 13:28:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:58.607 [2024-07-13 13:28:33.102200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:16:58.607 [2024-07-13 13:28:33.102283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:16:58.607 [2024-07-13 13:28:33.102305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:16:58.607 [2024-07-13 13:28:33.102324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:16:58.607 [2024-07-13 13:28:33.102342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:16:58.607 [2024-07-13 13:28:33.102360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:16:58.607 [2024-07-13 13:28:33.102378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:16:58.607 [2024-07-13 13:28:33.102396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:16:58.607 13:28:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.607 13:28:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:58.607 13:28:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.607 13:28:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:58.607 [2024-07-13 13:28:33.108990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.607 [2024-07-13 13:28:33.109046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.607 [2024-07-13 13:28:33.109108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.607 [2024-07-13 13:28:33.109160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109181] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.607 [2024-07-13 13:28:33.109221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:16:58.607 [2024-07-13 13:28:33.109320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.109351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.109415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.109462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.109524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.109573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.109618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.109665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.109718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.109764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.109810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.109879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.109935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.109960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.109981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.110959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.110981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.607 [2024-07-13 13:28:33.111814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.607 [2024-07-13 13:28:33.111837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.608 [2024-07-13 13:28:33.111860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.608 [2024-07-13 13:28:33.111893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.608 [2024-07-13 13:28:33.111916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.608 [2024-07-13 13:28:33.111939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.608 [2024-07-13 13:28:33.111961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.608 [2024-07-13 13:28:33.111984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.608 [2024-07-13 13:28:33.112005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.608 [2024-07-13 13:28:33.112028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.608 [2024-07-13 13:28:33.112049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.608 [2024-07-13 13:28:33.112072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.608 [2024-07-13 13:28:33.112093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.608 [2024-07-13 13:28:33.112116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.608 [2024-07-13 13:28:33.112137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.608 [2024-07-13 13:28:33.112160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.608 [2024-07-13 13:28:33.112181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.608 [2024-07-13 13:28:33.112205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.608 [2024-07-13 13:28:33.112233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.608 [2024-07-13 13:28:33.112258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.608 [2024-07-13 13:28:33.112280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.608 [2024-07-13 13:28:33.112303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.608 [2024-07-13 13:28:33.112325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.608 [2024-07-13 13:28:33.112623] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller. 00:16:58.608 [2024-07-13 13:28:33.113898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:58.608 13:28:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.608 13:28:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:58.608 task offset: 57344 on job bdev=Nvme0n1 fails 00:16:58.608 00:16:58.608 Latency(us) 00:16:58.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.608 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:58.608 Job: Nvme0n1 ended in about 0.37 seconds with error 00:16:58.608 Verification LBA range: start 0x0 length 0x400 00:16:58.608 Nvme0n1 : 0.37 1201.00 75.06 171.57 0.00 45132.09 4417.61 41360.50 00:16:58.608 =================================================================================================================== 00:16:58.608 Total : 1201.00 75.06 171.57 0.00 45132.09 4417.61 41360.50 00:16:58.608 [2024-07-13 13:28:33.118967] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:58.608 [2024-07-13 13:28:33.119016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:16:58.608 [2024-07-13 13:28:33.176122] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
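The failure path above is deliberate: the test removes host0 from cnode0's allowed-host list while bdevperf has writes in flight, the queued commands are aborted with SQ DELETION, bdevperf resets the controller, and because the host is added straight back the reset and reconnect succeed. A hedged sketch of the two RPCs driving that fault injection, with the NQNs from this run (the script issues them through its rpc_cmd wrapper rather than rpc.py directly):

    SUBSYS=nqn.2016-06.io.spdk:cnode0
    HOST=nqn.2016-06.io.spdk:host0

    # Pull the host's access while it has I/O outstanding...
    scripts/rpc.py nvmf_subsystem_remove_host "$SUBSYS" "$HOST"
    # ...then restore it so the initiator's controller reset can reconnect.
    scripts/rpc.py nvmf_subsystem_add_host "$SUBSYS" "$HOST"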
00:16:59.539 13:28:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 255989 00:16:59.539 13:28:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:59.539 13:28:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:59.539 13:28:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:59.539 13:28:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:59.539 13:28:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:59.539 13:28:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.539 13:28:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.539 { 00:16:59.539 "params": { 00:16:59.539 "name": "Nvme$subsystem", 00:16:59.539 "trtype": "$TEST_TRANSPORT", 00:16:59.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.539 "adrfam": "ipv4", 00:16:59.539 "trsvcid": "$NVMF_PORT", 00:16:59.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.539 "hdgst": ${hdgst:-false}, 00:16:59.539 "ddgst": ${ddgst:-false} 00:16:59.539 }, 00:16:59.539 "method": "bdev_nvme_attach_controller" 00:16:59.539 } 00:16:59.539 EOF 00:16:59.539 )") 00:16:59.539 13:28:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:59.539 13:28:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:59.539 13:28:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:59.539 13:28:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:59.539 "params": { 00:16:59.539 "name": "Nvme0", 00:16:59.539 "trtype": "tcp", 00:16:59.539 "traddr": "10.0.0.2", 00:16:59.539 "adrfam": "ipv4", 00:16:59.539 "trsvcid": "4420", 00:16:59.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:59.539 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:59.539 "hdgst": false, 00:16:59.539 "ddgst": false 00:16:59.539 }, 00:16:59.539 "method": "bdev_nvme_attach_controller" 00:16:59.539 }' 00:16:59.539 [2024-07-13 13:28:34.198431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:59.539 [2024-07-13 13:28:34.198580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid256285 ] 00:16:59.539 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.796 [2024-07-13 13:28:34.323913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.054 [2024-07-13 13:28:34.566322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.312 Running I/O for 1 seconds... 
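gen_nvmf_target_json, expanded twice in this trace, renders one bdev_nvme_attach_controller parameter block per requested subsystem index and hands the result to bdevperf through a /dev/fd process substitution, so no config file is written to disk. A minimal equivalent with a single hard-coded controller is sketched below; the outer "subsystems"/"config" wrapper is an assumption based on the standard SPDK JSON-config layout, while the inner params block is the one printed in the trace:

    # Minimal bdevperf JSON config for one NVMe-oF TCP controller (sketch).
    cat <<'EOF' > /tmp/bdevperf_nvme0.json
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # The test avoids the temp file by using process substitution instead:
    #   bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1
    build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1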
00:17:01.683 00:17:01.683 Latency(us) 00:17:01.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.683 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:01.683 Verification LBA range: start 0x0 length 0x400 00:17:01.683 Nvme0n1 : 1.03 1243.24 77.70 0.00 0.00 50633.75 11893.57 42331.40 00:17:01.683 =================================================================================================================== 00:17:01.683 Total : 1243.24 77.70 0.00 0.00 50633.75 11893.57 42331.40 00:17:02.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 255989 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:02.616 rmmod nvme_tcp 00:17:02.616 rmmod nvme_fabrics 00:17:02.616 rmmod nvme_keyring 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 255813 ']' 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 255813 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 255813 ']' 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 255813 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 255813 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 255813' 00:17:02.616 killing process with pid 
255813 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 255813 00:17:02.616 13:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 255813 00:17:03.990 [2024-07-13 13:28:38.457876] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:03.990 13:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:03.990 13:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:03.990 13:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:03.990 13:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.990 13:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:03.990 13:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.990 13:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.990 13:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.892 13:28:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:05.892 13:28:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:05.892 00:17:05.892 real 0m12.290s 00:17:05.892 user 0m34.096s 00:17:05.892 sys 0m3.075s 00:17:05.892 13:28:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:05.892 13:28:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:05.892 ************************************ 00:17:05.892 END TEST nvmf_host_management 00:17:05.892 ************************************ 00:17:05.892 13:28:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:05.892 13:28:40 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:05.892 13:28:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:05.892 13:28:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.893 13:28:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:06.151 ************************************ 00:17:06.151 START TEST nvmf_lvol 00:17:06.151 ************************************ 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:06.151 * Looking for test storage... 
00:17:06.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.151 13:28:40 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:06.151 13:28:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:08.050 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.050 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.051 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:08.051 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.051 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:08.051 
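The gather_supported_nvmf_pci_devs step above scans the known Intel (0x8086) and Mellanox (0x15b3) NVMf-capable device IDs and, with SPDK_TEST_NVMF_NICS=e810 on this rig, keeps the two E810 ports (device ID 0x159b) at 0000:0a:00.0 and 0000:0a:00.1, whose kernel netdevs are cvl_0_0 and cvl_0_1. A minimal sketch of taking the same inventory by hand, assuming the same vendor/device IDs as this run:

lspci -D -d 8086:159b                          # list E810 ports with full PCI addresses
ls /sys/bus/pci/devices/0000:0a:00.0/net/      # netdev bound to the first port (cvl_0_0 here)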
13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:08.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:17:08.051 00:17:08.051 --- 10.0.0.2 ping statistics --- 00:17:08.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.051 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:17:08.051 00:17:08.051 --- 10.0.0.1 ping statistics --- 00:17:08.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.051 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=258698 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 258698 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 258698 ']' 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.051 13:28:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:08.309 [2024-07-13 13:28:42.850069] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:08.309 [2024-07-13 13:28:42.850201] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.309 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.309 [2024-07-13 13:28:42.980888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:08.566 [2024-07-13 13:28:43.245182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.566 [2024-07-13 13:28:43.245259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
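The nvmftestinit phase just completed builds the loopback-style TCP topology out of the two E810 ports: cvl_0_0 is moved into a private network namespace and becomes the target-side interface at 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator side at 10.0.0.1, the firewall is opened for port 4420, and connectivity is confirmed with a ping in each direction. A condensed sketch of the equivalent manual setup, using the same interface names and addressing as this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x7), so every listener it opens is reachable only through cvl_0_0; its startup notices continue below.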
00:17:08.566 [2024-07-13 13:28:43.245293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.566 [2024-07-13 13:28:43.245313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.566 [2024-07-13 13:28:43.245335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.566 [2024-07-13 13:28:43.245461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.566 [2024-07-13 13:28:43.245529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.566 [2024-07-13 13:28:43.245538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.130 13:28:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.130 13:28:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:17:09.130 13:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:09.130 13:28:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:09.130 13:28:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:09.130 13:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.130 13:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:09.388 [2024-07-13 13:28:44.070186] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.388 13:28:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:10.007 13:28:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:10.007 13:28:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:10.265 13:28:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:10.265 13:28:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:10.522 13:28:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:10.780 13:28:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bfb0d5a0-a520-4a40-b708-25bca00445bc 00:17:10.780 13:28:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bfb0d5a0-a520-4a40-b708-25bca00445bc lvol 20 00:17:11.038 13:28:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e17790e0-2a6f-406a-8f71-8f50ee219c36 00:17:11.038 13:28:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:11.296 13:28:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e17790e0-2a6f-406a-8f71-8f50ee219c36 00:17:11.553 13:28:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
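Everything the lvol test needs on the target side is configured above through rpc.py against the nvmf_tgt just started: a TCP transport, two malloc bdevs striped into a raid0, an lvstore on top of the raid, a 20 MiB lvol, and a subsystem that exports the lvol on 10.0.0.2:4420. Condensed into one place (a sketch of the flow, assuming rpc.py is on PATH and the target is listening on /var/tmp/spdk.sock; <lvs-uuid> and <lvol-uuid> stand for the UUIDs the two create calls print, bfb0d5a0-... and e17790e0-... in this run):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                  # Malloc0
rpc.py bdev_malloc_create 64 512                                  # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs                         # prints <lvs-uuid>
rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                     # prints <lvol-uuid>
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The listener notice for that last call, plus a discovery-subsystem listener on the same address, follows in the log below.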
00:17:11.811 [2024-07-13 13:28:46.484056] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.811 13:28:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:12.069 13:28:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=259184 00:17:12.069 13:28:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:12.069 13:28:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:12.327 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.261 13:28:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e17790e0-2a6f-406a-8f71-8f50ee219c36 MY_SNAPSHOT 00:17:13.519 13:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=03d92e87-e015-4890-ad03-e70a5dbf012e 00:17:13.519 13:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e17790e0-2a6f-406a-8f71-8f50ee219c36 30 00:17:13.777 13:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 03d92e87-e015-4890-ad03-e70a5dbf012e MY_CLONE 00:17:14.035 13:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3e19ee4b-7318-49b6-8eb2-6931694df9e4 00:17:14.035 13:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3e19ee4b-7318-49b6-8eb2-6931694df9e4 00:17:14.968 13:28:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 259184 00:17:23.076 Initializing NVMe Controllers 00:17:23.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:23.076 Controller IO queue size 128, less than required. 00:17:23.076 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:23.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:23.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:23.076 Initialization complete. Launching workers. 
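While spdk_nvme_perf drives ten seconds of 4 KiB random writes (queue depth 128, core mask 0x18, i.e. lcores 3 and 4) against the exported lvol, the script exercises the lvol management path on the live volume: snapshot, resize from 20 to 30 (MiB), clone of the snapshot, then inflate of the clone so it no longer shares clusters with the snapshot. A sketch of that concurrent sequence, with <lvol>, <snap> and <clone> standing for the UUIDs each call reports in this run:

rpc.py bdev_lvol_snapshot <lvol> MY_SNAPSHOT      # read-only point-in-time snapshot
rpc.py bdev_lvol_resize <lvol> 30                 # grow the live lvol while I/O is in flight
rpc.py bdev_lvol_clone <snap> MY_CLONE            # thin clone backed by the snapshot
rpc.py bdev_lvol_inflate <clone>                  # copy shared clusters; clone becomes independent

The latency summary printed next shows the two initiator cores completing roughly 16.5k IOPS in aggregate despite the concurrent metadata operations.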
00:17:23.076 ======================================================== 00:17:23.076 Latency(us) 00:17:23.076 Device Information : IOPS MiB/s Average min max 00:17:23.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8327.80 32.53 15370.05 958.66 191860.52 00:17:23.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8167.60 31.90 15682.17 3844.25 150427.03 00:17:23.076 ======================================================== 00:17:23.076 Total : 16495.39 64.44 15524.60 958.66 191860.52 00:17:23.076 00:17:23.076 13:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:23.076 13:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e17790e0-2a6f-406a-8f71-8f50ee219c36 00:17:23.334 13:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bfb0d5a0-a520-4a40-b708-25bca00445bc 00:17:23.592 13:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:23.592 13:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.593 rmmod nvme_tcp 00:17:23.593 rmmod nvme_fabrics 00:17:23.593 rmmod nvme_keyring 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 258698 ']' 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 258698 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 258698 ']' 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 258698 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 258698 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 258698' 00:17:23.593 killing process with pid 258698 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 258698 00:17:23.593 13:28:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 258698 00:17:25.492 13:28:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:25.492 13:28:59 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:25.492 13:28:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:25.492 13:28:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.492 13:28:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:25.492 13:28:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.492 13:28:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.492 13:28:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:27.394 00:17:27.394 real 0m21.194s 00:17:27.394 user 1m10.840s 00:17:27.394 sys 0m5.418s 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:27.394 ************************************ 00:17:27.394 END TEST nvmf_lvol 00:17:27.394 ************************************ 00:17:27.394 13:29:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:27.394 13:29:01 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:27.394 13:29:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:27.394 13:29:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.394 13:29:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:27.394 ************************************ 00:17:27.394 START TEST nvmf_lvs_grow 00:17:27.394 ************************************ 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:27.394 * Looking for test storage... 
00:17:27.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:27.394 13:29:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.293 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:29.293 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:29.294 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:29.294 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:29.294 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:29.294 13:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.294 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.294 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.294 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:29.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:17:29.294 00:17:29.294 --- 10.0.0.2 ping statistics --- 00:17:29.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.294 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:17:29.294 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:29.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:17:29.294 00:17:29.294 --- 10.0.0.1 ping statistics --- 00:17:29.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.294 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:17:29.294 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=262572 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 262572 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 262572 ']' 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.573 13:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.573 [2024-07-13 13:29:04.153265] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:29.573 [2024-07-13 13:29:04.153416] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.573 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.573 [2024-07-13 13:29:04.289488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.857 [2024-07-13 13:29:04.545037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.857 [2024-07-13 13:29:04.545119] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:29.857 [2024-07-13 13:29:04.545147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.857 [2024-07-13 13:29:04.545172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.857 [2024-07-13 13:29:04.545205] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.857 [2024-07-13 13:29:04.545255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.423 13:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.423 13:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:30.423 13:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:30.423 13:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:30.423 13:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:30.423 13:29:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.423 13:29:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:30.680 [2024-07-13 13:29:05.323188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.680 13:29:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:30.680 13:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:30.680 13:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.680 13:29:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:30.681 ************************************ 00:17:30.681 START TEST lvs_grow_clean 00:17:30.681 ************************************ 00:17:30.681 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:30.681 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:30.681 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:30.681 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:30.681 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:30.681 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:30.681 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:30.681 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.681 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:30.681 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:30.938 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:30.938 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:31.196 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d9826f5e-28ce-49d8-a657-109e221b90ed 00:17:31.196 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9826f5e-28ce-49d8-a657-109e221b90ed 00:17:31.196 13:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:31.454 13:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:31.454 13:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:31.454 13:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d9826f5e-28ce-49d8-a657-109e221b90ed lvol 150 00:17:31.711 13:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ce72473f-4b0d-4a20-979e-ac7f631a011a 00:17:31.711 13:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:31.711 13:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:31.969 [2024-07-13 13:29:06.608488] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:31.969 [2024-07-13 13:29:06.608596] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:31.969 true 00:17:31.969 13:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9826f5e-28ce-49d8-a657-109e221b90ed 00:17:31.969 13:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:32.225 13:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:32.225 13:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:32.481 13:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ce72473f-4b0d-4a20-979e-ac7f631a011a 00:17:32.738 13:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:32.996 [2024-07-13 13:29:07.587663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.996 13:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
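lvs_grow_clean builds its lvstore on a file-backed AIO bdev precisely so the backing device can be enlarged mid-test: a 200 MiB file with a 4 MiB cluster size yields 49 usable data clusters, a 150 MiB lvol is carved out and exported through nqn.2016-06.io.spdk:cnode0, the file is then grown to 400 MiB and rescanned, and later in the run bdev_lvol_grow_lvstore expands the store from 49 to 99 data clusters while bdevperf I/O is running. A condensed sketch of that flow, assuming rpc.py is on PATH, aio_bdev is the backing file path, and <lvs-uuid> is the UUID printed by bdev_lvol_create_lvstore (d9826f5e-... in this run):

truncate -s 200M aio_bdev
rpc.py bdev_aio_create aio_bdev aio_bdev 4096                 # file, bdev name, 4 KiB block size
rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
rpc.py bdev_lvol_get_lvstores -u <lvs-uuid>                   # total_data_clusters: 49
rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150                # lvol exported as the subsystem namespace
truncate -s 400M aio_bdev                                     # enlarge the backing file
rpc.py bdev_aio_rescan aio_bdev                               # block count 51200 -> 102400
rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>                   # issued later in the run: 49 -> 99 clusters

The subsystem listener and the bdevperf initiator setup continue in the log below.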
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:33.255 13:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=263135 00:17:33.255 13:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:33.255 13:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:33.255 13:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 263135 /var/tmp/bdevperf.sock 00:17:33.255 13:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 263135 ']' 00:17:33.255 13:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.255 13:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.255 13:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:33.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.255 13:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.255 13:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:33.255 [2024-07-13 13:29:07.922283] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:33.255 [2024-07-13 13:29:07.922431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263135 ] 00:17:33.255 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.513 [2024-07-13 13:29:08.051714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.771 [2024-07-13 13:29:08.287174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.337 13:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.337 13:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:34.337 13:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:34.621 Nvme0n1 00:17:34.879 13:29:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:35.138 [ 00:17:35.138 { 00:17:35.138 "name": "Nvme0n1", 00:17:35.138 "aliases": [ 00:17:35.138 "ce72473f-4b0d-4a20-979e-ac7f631a011a" 00:17:35.138 ], 00:17:35.138 "product_name": "NVMe disk", 00:17:35.138 "block_size": 4096, 00:17:35.138 "num_blocks": 38912, 00:17:35.138 "uuid": "ce72473f-4b0d-4a20-979e-ac7f631a011a", 00:17:35.139 "assigned_rate_limits": { 00:17:35.139 "rw_ios_per_sec": 0, 00:17:35.139 "rw_mbytes_per_sec": 0, 00:17:35.139 "r_mbytes_per_sec": 0, 00:17:35.139 "w_mbytes_per_sec": 0 00:17:35.139 }, 00:17:35.139 "claimed": false, 00:17:35.139 "zoned": false, 00:17:35.139 "supported_io_types": { 00:17:35.139 "read": true, 00:17:35.139 "write": true, 00:17:35.139 "unmap": true, 00:17:35.139 "flush": true, 00:17:35.139 "reset": true, 00:17:35.139 "nvme_admin": true, 00:17:35.139 "nvme_io": true, 00:17:35.139 "nvme_io_md": false, 00:17:35.139 "write_zeroes": true, 00:17:35.139 "zcopy": false, 00:17:35.139 "get_zone_info": false, 00:17:35.139 "zone_management": false, 00:17:35.139 "zone_append": false, 00:17:35.139 "compare": true, 00:17:35.139 "compare_and_write": true, 00:17:35.139 "abort": true, 00:17:35.139 "seek_hole": false, 00:17:35.139 "seek_data": false, 00:17:35.139 "copy": true, 00:17:35.139 "nvme_iov_md": false 00:17:35.139 }, 00:17:35.139 "memory_domains": [ 00:17:35.139 { 00:17:35.139 "dma_device_id": "system", 00:17:35.139 "dma_device_type": 1 00:17:35.139 } 00:17:35.139 ], 00:17:35.139 "driver_specific": { 00:17:35.139 "nvme": [ 00:17:35.139 { 00:17:35.139 "trid": { 00:17:35.139 "trtype": "TCP", 00:17:35.139 "adrfam": "IPv4", 00:17:35.139 "traddr": "10.0.0.2", 00:17:35.139 "trsvcid": "4420", 00:17:35.139 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:35.139 }, 00:17:35.139 "ctrlr_data": { 00:17:35.139 "cntlid": 1, 00:17:35.139 "vendor_id": "0x8086", 00:17:35.139 "model_number": "SPDK bdev Controller", 00:17:35.139 "serial_number": "SPDK0", 00:17:35.139 "firmware_revision": "24.09", 00:17:35.139 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:35.139 "oacs": { 00:17:35.139 "security": 0, 00:17:35.139 "format": 0, 00:17:35.139 "firmware": 0, 00:17:35.139 "ns_manage": 0 00:17:35.139 }, 00:17:35.139 "multi_ctrlr": true, 00:17:35.139 "ana_reporting": false 00:17:35.139 }, 
00:17:35.139 "vs": { 00:17:35.139 "nvme_version": "1.3" 00:17:35.139 }, 00:17:35.139 "ns_data": { 00:17:35.139 "id": 1, 00:17:35.139 "can_share": true 00:17:35.139 } 00:17:35.139 } 00:17:35.139 ], 00:17:35.139 "mp_policy": "active_passive" 00:17:35.139 } 00:17:35.139 } 00:17:35.139 ] 00:17:35.139 13:29:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=263277 00:17:35.139 13:29:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:35.139 13:29:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:35.139 Running I/O for 10 seconds... 00:17:36.073 Latency(us) 00:17:36.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.073 Nvme0n1 : 1.00 10037.00 39.21 0.00 0.00 0.00 0.00 0.00 00:17:36.073 =================================================================================================================== 00:17:36.073 Total : 10037.00 39.21 0.00 0.00 0.00 0.00 0.00 00:17:36.073 00:17:37.007 13:29:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d9826f5e-28ce-49d8-a657-109e221b90ed 00:17:37.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.265 Nvme0n1 : 2.00 10352.50 40.44 0.00 0.00 0.00 0.00 0.00 00:17:37.265 =================================================================================================================== 00:17:37.265 Total : 10352.50 40.44 0.00 0.00 0.00 0.00 0.00 00:17:37.265 00:17:37.265 true 00:17:37.265 13:29:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9826f5e-28ce-49d8-a657-109e221b90ed 00:17:37.265 13:29:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:37.523 13:29:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:37.524 13:29:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:37.524 13:29:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 263277 00:17:38.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.090 Nvme0n1 : 3.00 10415.33 40.68 0.00 0.00 0.00 0.00 0.00 00:17:38.090 =================================================================================================================== 00:17:38.090 Total : 10415.33 40.68 0.00 0.00 0.00 0.00 0.00 00:17:38.090 00:17:39.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.462 Nvme0n1 : 4.00 10446.75 40.81 0.00 0.00 0.00 0.00 0.00 00:17:39.462 =================================================================================================================== 00:17:39.462 Total : 10446.75 40.81 0.00 0.00 0.00 0.00 0.00 00:17:39.462 00:17:40.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.395 Nvme0n1 : 5.00 10491.00 40.98 0.00 0.00 0.00 0.00 0.00 00:17:40.395 =================================================================================================================== 00:17:40.395 
Total : 10491.00 40.98 0.00 0.00 0.00 0.00 0.00 00:17:40.395 00:17:41.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.329 Nvme0n1 : 6.00 10541.67 41.18 0.00 0.00 0.00 0.00 0.00 00:17:41.329 =================================================================================================================== 00:17:41.329 Total : 10541.67 41.18 0.00 0.00 0.00 0.00 0.00 00:17:41.329 00:17:42.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.264 Nvme0n1 : 7.00 10614.14 41.46 0.00 0.00 0.00 0.00 0.00 00:17:42.264 =================================================================================================================== 00:17:42.264 Total : 10614.14 41.46 0.00 0.00 0.00 0.00 0.00 00:17:42.264 00:17:43.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.200 Nvme0n1 : 8.00 10620.88 41.49 0.00 0.00 0.00 0.00 0.00 00:17:43.200 =================================================================================================================== 00:17:43.200 Total : 10620.88 41.49 0.00 0.00 0.00 0.00 0.00 00:17:43.200 00:17:44.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.163 Nvme0n1 : 9.00 10668.44 41.67 0.00 0.00 0.00 0.00 0.00 00:17:44.163 =================================================================================================================== 00:17:44.163 Total : 10668.44 41.67 0.00 0.00 0.00 0.00 0.00 00:17:44.163 00:17:45.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.096 Nvme0n1 : 10.00 10674.90 41.70 0.00 0.00 0.00 0.00 0.00 00:17:45.096 =================================================================================================================== 00:17:45.096 Total : 10674.90 41.70 0.00 0.00 0.00 0.00 0.00 00:17:45.096 00:17:45.096 00:17:45.096 Latency(us) 00:17:45.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.096 Nvme0n1 : 10.00 10682.57 41.73 0.00 0.00 11975.09 3810.80 23884.23 00:17:45.096 =================================================================================================================== 00:17:45.096 Total : 10682.57 41.73 0.00 0.00 11975.09 3810.80 23884.23 00:17:45.096 0 00:17:45.096 13:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 263135 00:17:45.096 13:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 263135 ']' 00:17:45.096 13:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 263135 00:17:45.096 13:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:45.096 13:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:45.096 13:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 263135 00:17:45.354 13:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:45.354 13:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:45.354 13:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 263135' 00:17:45.354 killing process with pid 263135 00:17:45.354 13:29:19 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 263135 00:17:45.354 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.354 00:17:45.354 Latency(us) 00:17:45.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.354 =================================================================================================================== 00:17:45.354 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.354 13:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 263135 00:17:46.287 13:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:46.545 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:46.803 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9826f5e-28ce-49d8-a657-109e221b90ed 00:17:46.803 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:47.061 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:47.061 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:47.061 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:47.319 [2024-07-13 13:29:21.905892] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:47.319 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9826f5e-28ce-49d8-a657-109e221b90ed 00:17:47.319 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:47.319 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9826f5e-28ce-49d8-a657-109e221b90ed 00:17:47.319 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.319 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.319 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.319 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.319 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.319 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.319 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.319 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:47.319 13:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9826f5e-28ce-49d8-a657-109e221b90ed 00:17:47.577 request: 00:17:47.577 { 00:17:47.577 "uuid": "d9826f5e-28ce-49d8-a657-109e221b90ed", 00:17:47.577 "method": "bdev_lvol_get_lvstores", 00:17:47.577 "req_id": 1 00:17:47.577 } 00:17:47.577 Got JSON-RPC error response 00:17:47.577 response: 00:17:47.577 { 00:17:47.577 "code": -19, 00:17:47.577 "message": "No such device" 00:17:47.577 } 00:17:47.577 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:47.577 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:47.577 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:47.577 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:47.577 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:47.835 aio_bdev 00:17:47.835 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ce72473f-4b0d-4a20-979e-ac7f631a011a 00:17:47.835 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=ce72473f-4b0d-4a20-979e-ac7f631a011a 00:17:47.835 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:47.835 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:47.835 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:47.835 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:47.835 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:48.093 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ce72473f-4b0d-4a20-979e-ac7f631a011a -t 2000 00:17:48.350 [ 00:17:48.350 { 00:17:48.350 "name": "ce72473f-4b0d-4a20-979e-ac7f631a011a", 00:17:48.350 "aliases": [ 00:17:48.350 "lvs/lvol" 00:17:48.350 ], 00:17:48.350 "product_name": "Logical Volume", 00:17:48.350 "block_size": 4096, 00:17:48.350 "num_blocks": 38912, 00:17:48.350 "uuid": "ce72473f-4b0d-4a20-979e-ac7f631a011a", 00:17:48.350 "assigned_rate_limits": { 00:17:48.350 "rw_ios_per_sec": 0, 00:17:48.350 "rw_mbytes_per_sec": 0, 00:17:48.350 "r_mbytes_per_sec": 0, 00:17:48.350 "w_mbytes_per_sec": 0 00:17:48.350 }, 00:17:48.350 "claimed": false, 00:17:48.350 "zoned": false, 00:17:48.350 "supported_io_types": { 00:17:48.350 "read": true, 00:17:48.350 "write": true, 00:17:48.350 "unmap": true, 00:17:48.350 "flush": false, 00:17:48.350 "reset": true, 00:17:48.350 "nvme_admin": false, 00:17:48.350 "nvme_io": false, 00:17:48.350 
"nvme_io_md": false, 00:17:48.350 "write_zeroes": true, 00:17:48.350 "zcopy": false, 00:17:48.350 "get_zone_info": false, 00:17:48.350 "zone_management": false, 00:17:48.350 "zone_append": false, 00:17:48.350 "compare": false, 00:17:48.350 "compare_and_write": false, 00:17:48.350 "abort": false, 00:17:48.350 "seek_hole": true, 00:17:48.350 "seek_data": true, 00:17:48.350 "copy": false, 00:17:48.350 "nvme_iov_md": false 00:17:48.350 }, 00:17:48.350 "driver_specific": { 00:17:48.350 "lvol": { 00:17:48.350 "lvol_store_uuid": "d9826f5e-28ce-49d8-a657-109e221b90ed", 00:17:48.350 "base_bdev": "aio_bdev", 00:17:48.350 "thin_provision": false, 00:17:48.350 "num_allocated_clusters": 38, 00:17:48.350 "snapshot": false, 00:17:48.350 "clone": false, 00:17:48.350 "esnap_clone": false 00:17:48.350 } 00:17:48.350 } 00:17:48.350 } 00:17:48.350 ] 00:17:48.350 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:48.350 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9826f5e-28ce-49d8-a657-109e221b90ed 00:17:48.350 13:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:48.606 13:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:48.606 13:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d9826f5e-28ce-49d8-a657-109e221b90ed 00:17:48.606 13:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:48.862 13:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:48.862 13:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ce72473f-4b0d-4a20-979e-ac7f631a011a 00:17:49.118 13:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d9826f5e-28ce-49d8-a657-109e221b90ed 00:17:49.375 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:49.631 00:17:49.631 real 0m18.927s 00:17:49.631 user 0m18.719s 00:17:49.631 sys 0m1.899s 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:49.631 ************************************ 00:17:49.631 END TEST lvs_grow_clean 00:17:49.631 ************************************ 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:49.631 ************************************ 00:17:49.631 START TEST lvs_grow_dirty 00:17:49.631 ************************************ 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:49.631 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:50.194 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:50.194 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:50.194 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:17:50.194 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:17:50.194 13:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:50.450 13:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:50.450 13:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:50.450 13:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 lvol 150 00:17:50.707 13:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=692f12f3-a65c-49c9-b7c9-b12f12de4aa5 00:17:50.707 13:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:50.707 13:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:50.965 
[2024-07-13 13:29:25.677576] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:50.965 [2024-07-13 13:29:25.677680] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:50.965 true 00:17:50.965 13:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:17:50.965 13:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:51.222 13:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:51.222 13:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:51.480 13:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 692f12f3-a65c-49c9-b7c9-b12f12de4aa5 00:17:51.737 13:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:52.302 [2024-07-13 13:29:26.745017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.302 13:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:52.302 13:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=265440 00:17:52.302 13:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:52.302 13:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:52.302 13:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 265440 /var/tmp/bdevperf.sock 00:17:52.302 13:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 265440 ']' 00:17:52.302 13:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.302 13:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.302 13:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
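Note: I/O is driven the same way in both variants. bdevperf is started with -z so it idles until it is configured over its own RPC socket, the target's namespace is attached as a local NVMe bdev through that socket, and bdevperf.py then triggers the 10-second randwrite run. A sketch of that sequence with the flags taken from the log (paths shortened; the harness's waitforlisten helper is replaced here by a plain poll on the socket):

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  bdevperf_pid=$!
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done   # stand-in for waitforlisten
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode0              # exposes the remote namespace as Nvme0n1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests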
00:17:52.302 13:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.302 13:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:52.560 [2024-07-13 13:29:27.094553] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:52.560 [2024-07-13 13:29:27.094697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid265440 ] 00:17:52.560 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.560 [2024-07-13 13:29:27.226416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.817 [2024-07-13 13:29:27.479130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.383 13:29:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.383 13:29:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:53.383 13:29:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:53.948 Nvme0n1 00:17:53.948 13:29:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:54.206 [ 00:17:54.206 { 00:17:54.206 "name": "Nvme0n1", 00:17:54.206 "aliases": [ 00:17:54.206 "692f12f3-a65c-49c9-b7c9-b12f12de4aa5" 00:17:54.206 ], 00:17:54.206 "product_name": "NVMe disk", 00:17:54.206 "block_size": 4096, 00:17:54.206 "num_blocks": 38912, 00:17:54.206 "uuid": "692f12f3-a65c-49c9-b7c9-b12f12de4aa5", 00:17:54.206 "assigned_rate_limits": { 00:17:54.206 "rw_ios_per_sec": 0, 00:17:54.206 "rw_mbytes_per_sec": 0, 00:17:54.206 "r_mbytes_per_sec": 0, 00:17:54.206 "w_mbytes_per_sec": 0 00:17:54.206 }, 00:17:54.206 "claimed": false, 00:17:54.206 "zoned": false, 00:17:54.206 "supported_io_types": { 00:17:54.206 "read": true, 00:17:54.206 "write": true, 00:17:54.206 "unmap": true, 00:17:54.206 "flush": true, 00:17:54.206 "reset": true, 00:17:54.206 "nvme_admin": true, 00:17:54.206 "nvme_io": true, 00:17:54.206 "nvme_io_md": false, 00:17:54.206 "write_zeroes": true, 00:17:54.206 "zcopy": false, 00:17:54.206 "get_zone_info": false, 00:17:54.206 "zone_management": false, 00:17:54.206 "zone_append": false, 00:17:54.206 "compare": true, 00:17:54.206 "compare_and_write": true, 00:17:54.206 "abort": true, 00:17:54.206 "seek_hole": false, 00:17:54.206 "seek_data": false, 00:17:54.206 "copy": true, 00:17:54.206 "nvme_iov_md": false 00:17:54.206 }, 00:17:54.206 "memory_domains": [ 00:17:54.206 { 00:17:54.206 "dma_device_id": "system", 00:17:54.206 "dma_device_type": 1 00:17:54.206 } 00:17:54.206 ], 00:17:54.206 "driver_specific": { 00:17:54.206 "nvme": [ 00:17:54.206 { 00:17:54.206 "trid": { 00:17:54.206 "trtype": "TCP", 00:17:54.206 "adrfam": "IPv4", 00:17:54.206 "traddr": "10.0.0.2", 00:17:54.206 "trsvcid": "4420", 00:17:54.206 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:54.206 }, 00:17:54.206 "ctrlr_data": { 00:17:54.206 "cntlid": 1, 00:17:54.206 "vendor_id": "0x8086", 00:17:54.206 "model_number": "SPDK bdev Controller", 00:17:54.206 "serial_number": "SPDK0", 
00:17:54.206 "firmware_revision": "24.09", 00:17:54.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:54.206 "oacs": { 00:17:54.206 "security": 0, 00:17:54.206 "format": 0, 00:17:54.206 "firmware": 0, 00:17:54.206 "ns_manage": 0 00:17:54.206 }, 00:17:54.206 "multi_ctrlr": true, 00:17:54.206 "ana_reporting": false 00:17:54.206 }, 00:17:54.206 "vs": { 00:17:54.206 "nvme_version": "1.3" 00:17:54.206 }, 00:17:54.206 "ns_data": { 00:17:54.206 "id": 1, 00:17:54.206 "can_share": true 00:17:54.206 } 00:17:54.206 } 00:17:54.206 ], 00:17:54.206 "mp_policy": "active_passive" 00:17:54.206 } 00:17:54.206 } 00:17:54.206 ] 00:17:54.206 13:29:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=265583 00:17:54.206 13:29:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:54.206 13:29:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:54.206 Running I/O for 10 seconds... 00:17:55.138 Latency(us) 00:17:55.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.138 Nvme0n1 : 1.00 11177.00 43.66 0.00 0.00 0.00 0.00 0.00 00:17:55.138 =================================================================================================================== 00:17:55.138 Total : 11177.00 43.66 0.00 0.00 0.00 0.00 0.00 00:17:55.138 00:17:56.070 13:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:17:56.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.327 Nvme0n1 : 2.00 11082.00 43.29 0.00 0.00 0.00 0.00 0.00 00:17:56.327 =================================================================================================================== 00:17:56.327 Total : 11082.00 43.29 0.00 0.00 0.00 0.00 0.00 00:17:56.327 00:17:56.327 true 00:17:56.327 13:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:17:56.327 13:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:56.584 13:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:56.584 13:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:56.584 13:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 265583 00:17:57.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.148 Nvme0n1 : 3.00 11200.00 43.75 0.00 0.00 0.00 0.00 0.00 00:17:57.148 =================================================================================================================== 00:17:57.148 Total : 11200.00 43.75 0.00 0.00 0.00 0.00 0.00 00:17:57.148 00:17:58.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.137 Nvme0n1 : 4.00 11180.00 43.67 0.00 0.00 0.00 0.00 0.00 00:17:58.137 =================================================================================================================== 00:17:58.137 Total : 11180.00 43.67 0.00 0.00 
0.00 0.00 0.00 00:17:58.137 00:17:59.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.508 Nvme0n1 : 5.00 11180.40 43.67 0.00 0.00 0.00 0.00 0.00 00:17:59.508 =================================================================================================================== 00:17:59.508 Total : 11180.40 43.67 0.00 0.00 0.00 0.00 0.00 00:17:59.508 00:18:00.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.443 Nvme0n1 : 6.00 11238.00 43.90 0.00 0.00 0.00 0.00 0.00 00:18:00.443 =================================================================================================================== 00:18:00.443 Total : 11238.00 43.90 0.00 0.00 0.00 0.00 0.00 00:18:00.443 00:18:01.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:01.384 Nvme0n1 : 7.00 11249.00 43.94 0.00 0.00 0.00 0.00 0.00 00:18:01.384 =================================================================================================================== 00:18:01.384 Total : 11249.00 43.94 0.00 0.00 0.00 0.00 0.00 00:18:01.384 00:18:02.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:02.317 Nvme0n1 : 8.00 11299.75 44.14 0.00 0.00 0.00 0.00 0.00 00:18:02.317 =================================================================================================================== 00:18:02.317 Total : 11299.75 44.14 0.00 0.00 0.00 0.00 0.00 00:18:02.317 00:18:03.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.251 Nvme0n1 : 9.00 11300.78 44.14 0.00 0.00 0.00 0.00 0.00 00:18:03.251 =================================================================================================================== 00:18:03.251 Total : 11300.78 44.14 0.00 0.00 0.00 0.00 0.00 00:18:03.251 00:18:04.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.185 Nvme0n1 : 10.00 11342.10 44.31 0.00 0.00 0.00 0.00 0.00 00:18:04.185 =================================================================================================================== 00:18:04.185 Total : 11342.10 44.31 0.00 0.00 0.00 0.00 0.00 00:18:04.185 00:18:04.185 00:18:04.185 Latency(us) 00:18:04.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.185 Nvme0n1 : 10.01 11343.58 44.31 0.00 0.00 11276.99 2900.57 22233.69 00:18:04.185 =================================================================================================================== 00:18:04.185 Total : 11343.58 44.31 0.00 0.00 11276.99 2900.57 22233.69 00:18:04.185 0 00:18:04.185 13:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 265440 00:18:04.185 13:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 265440 ']' 00:18:04.185 13:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 265440 00:18:04.185 13:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:18:04.185 13:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.185 13:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 265440 00:18:04.185 13:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:04.186 13:29:38 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:04.186 13:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 265440' 00:18:04.186 killing process with pid 265440 00:18:04.186 13:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 265440 00:18:04.186 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.186 00:18:04.186 Latency(us) 00:18:04.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.186 =================================================================================================================== 00:18:04.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.186 13:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 265440 00:18:05.560 13:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:05.560 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:05.817 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:18:05.817 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 262572 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 262572 00:18:06.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 262572 Killed "${NVMF_APP[@]}" "$@" 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=267035 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 267035 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 267035 ']' 00:18:06.383 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.384 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.384 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.384 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.384 13:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:06.384 [2024-07-13 13:29:40.975919] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:06.384 [2024-07-13 13:29:40.976057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.384 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.384 [2024-07-13 13:29:41.118110] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.642 [2024-07-13 13:29:41.359303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.642 [2024-07-13 13:29:41.359371] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.642 [2024-07-13 13:29:41.359394] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.642 [2024-07-13 13:29:41.359415] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.642 [2024-07-13 13:29:41.359432] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
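Note: what makes this the dirty variant is visible just above. Instead of tearing the lvolstore down cleanly, the harness kill -9s the nvmf_tgt that owns it, starts a fresh target (pid 267035 here), and re-creates the AIO bdev, so the blobstore has to be replayed from its on-disk metadata; the "Performing recovery on blobstore" notices that follow are the point of the test. A rough sketch of that sequence, where $nvmfpid, $AIO_FILE and $lvol are placeholders and the ip netns wrapper from the log is omitted:

  kill -9 "$nvmfpid"                                   # simulate a crash while the lvolstore is loaded
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # ... wait for the new target's RPC socket, then re-attach the base bdev ...
  rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096     # loading the lvolstore now triggers blobstore recovery
  rpc.py bdev_wait_for_examine
  rpc.py bdev_get_bdevs -b "$lvol" -t 2000             # the lvol must come back with its original uuid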
00:18:06.642 [2024-07-13 13:29:41.359480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.208 13:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.208 13:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:18:07.208 13:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:07.208 13:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:07.208 13:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:07.465 13:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.465 13:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:07.465 [2024-07-13 13:29:42.192579] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:07.465 [2024-07-13 13:29:42.192827] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:07.465 [2024-07-13 13:29:42.192952] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:07.724 13:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:07.724 13:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 692f12f3-a65c-49c9-b7c9-b12f12de4aa5 00:18:07.724 13:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=692f12f3-a65c-49c9-b7c9-b12f12de4aa5 00:18:07.724 13:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:07.724 13:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:07.724 13:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:07.724 13:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:07.724 13:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:07.982 13:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 692f12f3-a65c-49c9-b7c9-b12f12de4aa5 -t 2000 00:18:08.240 [ 00:18:08.240 { 00:18:08.240 "name": "692f12f3-a65c-49c9-b7c9-b12f12de4aa5", 00:18:08.240 "aliases": [ 00:18:08.240 "lvs/lvol" 00:18:08.240 ], 00:18:08.240 "product_name": "Logical Volume", 00:18:08.240 "block_size": 4096, 00:18:08.240 "num_blocks": 38912, 00:18:08.240 "uuid": "692f12f3-a65c-49c9-b7c9-b12f12de4aa5", 00:18:08.240 "assigned_rate_limits": { 00:18:08.240 "rw_ios_per_sec": 0, 00:18:08.240 "rw_mbytes_per_sec": 0, 00:18:08.240 "r_mbytes_per_sec": 0, 00:18:08.240 "w_mbytes_per_sec": 0 00:18:08.240 }, 00:18:08.240 "claimed": false, 00:18:08.240 "zoned": false, 00:18:08.240 "supported_io_types": { 00:18:08.240 "read": true, 00:18:08.240 "write": true, 00:18:08.240 "unmap": true, 00:18:08.240 "flush": false, 00:18:08.240 "reset": true, 00:18:08.240 "nvme_admin": false, 00:18:08.240 "nvme_io": false, 00:18:08.240 "nvme_io_md": 
false, 00:18:08.240 "write_zeroes": true, 00:18:08.240 "zcopy": false, 00:18:08.240 "get_zone_info": false, 00:18:08.240 "zone_management": false, 00:18:08.240 "zone_append": false, 00:18:08.240 "compare": false, 00:18:08.241 "compare_and_write": false, 00:18:08.241 "abort": false, 00:18:08.241 "seek_hole": true, 00:18:08.241 "seek_data": true, 00:18:08.241 "copy": false, 00:18:08.241 "nvme_iov_md": false 00:18:08.241 }, 00:18:08.241 "driver_specific": { 00:18:08.241 "lvol": { 00:18:08.241 "lvol_store_uuid": "1baf0f9c-2b05-4ccd-837f-9173fdf88701", 00:18:08.241 "base_bdev": "aio_bdev", 00:18:08.241 "thin_provision": false, 00:18:08.241 "num_allocated_clusters": 38, 00:18:08.241 "snapshot": false, 00:18:08.241 "clone": false, 00:18:08.241 "esnap_clone": false 00:18:08.241 } 00:18:08.241 } 00:18:08.241 } 00:18:08.241 ] 00:18:08.241 13:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:08.241 13:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:18:08.241 13:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:08.497 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:08.497 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:18:08.497 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:08.754 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:08.754 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:09.011 [2024-07-13 13:29:43.577468] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:09.011 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:18:09.011 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:09.012 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:18:09.012 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.012 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.012 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.012 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.012 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
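Note: removing the base bdev is also asserted as a failure case. The bdev_aio_delete above closes the lvolstore (the vbdev_lvs_hotremove_cb notice), so the bdev_lvol_get_lvstores call that follows is expected to fail with -19 / "No such device", which the harness checks through its NOT wrapper. Expressed directly, the same expectation looks roughly like this, with $lvs_uuid standing in for the uuid in the log:

  rpc.py bdev_aio_delete aio_bdev
  if rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" 2>/dev/null; then
      echo "lvstore still visible after its base bdev was deleted" >&2
      exit 1
  fi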
00:18:09.012 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.012 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.012 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:09.012 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:18:09.269 request: 00:18:09.269 { 00:18:09.269 "uuid": "1baf0f9c-2b05-4ccd-837f-9173fdf88701", 00:18:09.269 "method": "bdev_lvol_get_lvstores", 00:18:09.269 "req_id": 1 00:18:09.269 } 00:18:09.269 Got JSON-RPC error response 00:18:09.269 response: 00:18:09.269 { 00:18:09.269 "code": -19, 00:18:09.269 "message": "No such device" 00:18:09.269 } 00:18:09.269 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:09.269 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:09.269 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:09.269 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:09.269 13:29:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:09.526 aio_bdev 00:18:09.526 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 692f12f3-a65c-49c9-b7c9-b12f12de4aa5 00:18:09.526 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=692f12f3-a65c-49c9-b7c9-b12f12de4aa5 00:18:09.526 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:09.526 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:09.526 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:09.526 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:09.526 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:09.782 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 692f12f3-a65c-49c9-b7c9-b12f12de4aa5 -t 2000 00:18:10.039 [ 00:18:10.039 { 00:18:10.039 "name": "692f12f3-a65c-49c9-b7c9-b12f12de4aa5", 00:18:10.039 "aliases": [ 00:18:10.039 "lvs/lvol" 00:18:10.039 ], 00:18:10.039 "product_name": "Logical Volume", 00:18:10.039 "block_size": 4096, 00:18:10.039 "num_blocks": 38912, 00:18:10.039 "uuid": "692f12f3-a65c-49c9-b7c9-b12f12de4aa5", 00:18:10.039 "assigned_rate_limits": { 00:18:10.039 "rw_ios_per_sec": 0, 00:18:10.039 "rw_mbytes_per_sec": 0, 00:18:10.039 "r_mbytes_per_sec": 0, 00:18:10.039 "w_mbytes_per_sec": 0 00:18:10.039 }, 00:18:10.039 "claimed": false, 00:18:10.039 "zoned": false, 00:18:10.039 "supported_io_types": { 
00:18:10.039 "read": true, 00:18:10.039 "write": true, 00:18:10.039 "unmap": true, 00:18:10.039 "flush": false, 00:18:10.039 "reset": true, 00:18:10.039 "nvme_admin": false, 00:18:10.039 "nvme_io": false, 00:18:10.039 "nvme_io_md": false, 00:18:10.039 "write_zeroes": true, 00:18:10.039 "zcopy": false, 00:18:10.039 "get_zone_info": false, 00:18:10.039 "zone_management": false, 00:18:10.040 "zone_append": false, 00:18:10.040 "compare": false, 00:18:10.040 "compare_and_write": false, 00:18:10.040 "abort": false, 00:18:10.040 "seek_hole": true, 00:18:10.040 "seek_data": true, 00:18:10.040 "copy": false, 00:18:10.040 "nvme_iov_md": false 00:18:10.040 }, 00:18:10.040 "driver_specific": { 00:18:10.040 "lvol": { 00:18:10.040 "lvol_store_uuid": "1baf0f9c-2b05-4ccd-837f-9173fdf88701", 00:18:10.040 "base_bdev": "aio_bdev", 00:18:10.040 "thin_provision": false, 00:18:10.040 "num_allocated_clusters": 38, 00:18:10.040 "snapshot": false, 00:18:10.040 "clone": false, 00:18:10.040 "esnap_clone": false 00:18:10.040 } 00:18:10.040 } 00:18:10.040 } 00:18:10.040 ] 00:18:10.040 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:10.040 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:18:10.040 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:10.297 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:10.297 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:18:10.297 13:29:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:10.554 13:29:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:10.554 13:29:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 692f12f3-a65c-49c9-b7c9-b12f12de4aa5 00:18:10.811 13:29:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1baf0f9c-2b05-4ccd-837f-9173fdf88701 00:18:11.084 13:29:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:11.342 13:29:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:11.342 00:18:11.342 real 0m21.644s 00:18:11.342 user 0m54.206s 00:18:11.342 sys 0m4.794s 00:18:11.342 13:29:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.342 13:29:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:11.342 ************************************ 00:18:11.342 END TEST lvs_grow_dirty 00:18:11.342 ************************************ 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
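Note: both runs hinge on the same grow-and-verify step, issued while bdevperf is still writing, and the cluster counts checked throughout follow directly from the 4 MiB cluster size: 200 MiB of backing file yields 50 clusters, of which 49 are reported as data clusters (consistent with one cluster held for lvolstore metadata); after the file is grown to 400 MiB and the store is grown, that becomes 99; the 150 MiB lvol occupies ceil(150/4) = 38 clusters, leaving 99 - 38 = 61 free, which is exactly what the free_clusters assertions expect. The step itself is one RPC plus a check ($lvs_uuid is again a placeholder):

  rpc.py bdev_lvol_grow_lvstore -u "$lvs_uuid"
  total=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
  (( total == 99 )) || exit 1          # the store now spans the enlarged AIO bdev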
00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:11.342 nvmf_trace.0 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:11.342 13:29:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:11.342 rmmod nvme_tcp 00:18:11.342 rmmod nvme_fabrics 00:18:11.342 rmmod nvme_keyring 00:18:11.599 13:29:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:11.599 13:29:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:11.599 13:29:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:11.599 13:29:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 267035 ']' 00:18:11.599 13:29:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 267035 00:18:11.599 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 267035 ']' 00:18:11.600 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 267035 00:18:11.600 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:18:11.600 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:11.600 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 267035 00:18:11.600 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:11.600 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:11.600 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 267035' 00:18:11.600 killing process with pid 267035 00:18:11.600 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 267035 00:18:11.600 13:29:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 267035 00:18:13.009 13:29:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:13.009 13:29:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:13.009 13:29:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:13.009 13:29:47 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.009 13:29:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.009 13:29:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.009 13:29:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.009 13:29:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.913 13:29:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:14.913 00:18:14.913 real 0m47.534s 00:18:14.913 user 1m20.491s 00:18:14.913 sys 0m8.737s 00:18:14.913 13:29:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:14.913 13:29:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:14.913 ************************************ 00:18:14.913 END TEST nvmf_lvs_grow 00:18:14.913 ************************************ 00:18:14.913 13:29:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:14.913 13:29:49 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:14.913 13:29:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:14.913 13:29:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.913 13:29:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:14.913 ************************************ 00:18:14.913 START TEST nvmf_bdev_io_wait 00:18:14.913 ************************************ 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:14.913 * Looking for test storage... 
00:18:14.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.913 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:14.914 13:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.814 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:16.815 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:16.815 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:16.815 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:16.815 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.815 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:17.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:17.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:18:17.074 00:18:17.074 --- 10.0.0.2 ping statistics --- 00:18:17.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.074 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:18:17.074 00:18:17.074 --- 10.0.0.1 ping statistics --- 00:18:17.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.074 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=269698 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 269698 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 269698 ']' 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.074 13:29:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.074 [2024-07-13 13:29:51.765101] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
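The bdev_io_wait setup above splits the two e810 ports between the host and a target network namespace and verifies reachability with one ping in each direction before the target is started. The commands below condense exactly what the trace ran; interface names and addresses are the ones printed above:

# Reset addressing, then move the target-side port into its own namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Address both ends and bring the links (plus the namespace loopback) up.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP (port 4420) in from the initiator-side interface, then probe both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1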
00:18:17.074 [2024-07-13 13:29:51.765248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.332 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.332 [2024-07-13 13:29:51.909022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.589 [2024-07-13 13:29:52.172672] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.589 [2024-07-13 13:29:52.172755] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.589 [2024-07-13 13:29:52.172784] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.589 [2024-07-13 13:29:52.172805] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.589 [2024-07-13 13:29:52.172827] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.589 [2024-07-13 13:29:52.172964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.589 [2024-07-13 13:29:52.173038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.589 [2024-07-13 13:29:52.173133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.589 [2024-07-13 13:29:52.173144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:18.154 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.154 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:18:18.154 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.154 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:18.154 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.154 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.154 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:18.154 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.154 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.154 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.154 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:18.154 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.154 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.411 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.411 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:18.411 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.411 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.411 [2024-07-13 13:29:52.945349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.411 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
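With networking in place, the target is started inside the namespace with --wait-for-rpc and its initialization is completed over the RPC socket before the TCP transport is created. Condensed from the trace above; PID bookkeeping and the waitforlisten helper live in the suite's common scripts, so only the literal commands are shown here:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Launch nvmf_tgt in the target namespace; it idles until framework_start_init is called.
ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!   # 269698 in this run
# Queue bdev options before init, start the framework, then add the TCP transport.
$spdk/scripts/rpc.py bdev_set_options -p 5 -c 1
$spdk/scripts/rpc.py framework_start_init
$spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192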
00:18:18.411 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:18.411 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.411 13:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.411 Malloc0 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:18.411 [2024-07-13 13:29:53.060763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=269970 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=269972 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:18.411 { 00:18:18.411 "params": { 00:18:18.411 "name": "Nvme$subsystem", 00:18:18.411 "trtype": "$TEST_TRANSPORT", 00:18:18.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.411 "adrfam": "ipv4", 00:18:18.411 "trsvcid": "$NVMF_PORT", 00:18:18.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.411 "hdgst": ${hdgst:-false}, 00:18:18.411 "ddgst": ${ddgst:-false} 00:18:18.411 }, 00:18:18.411 "method": "bdev_nvme_attach_controller" 00:18:18.411 } 00:18:18.411 EOF 00:18:18.411 )") 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:18.411 13:29:53 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=269974 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:18.411 { 00:18:18.411 "params": { 00:18:18.411 "name": "Nvme$subsystem", 00:18:18.411 "trtype": "$TEST_TRANSPORT", 00:18:18.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.411 "adrfam": "ipv4", 00:18:18.411 "trsvcid": "$NVMF_PORT", 00:18:18.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.411 "hdgst": ${hdgst:-false}, 00:18:18.411 "ddgst": ${ddgst:-false} 00:18:18.411 }, 00:18:18.411 "method": "bdev_nvme_attach_controller" 00:18:18.411 } 00:18:18.411 EOF 00:18:18.411 )") 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=269977 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:18.411 { 00:18:18.411 "params": { 00:18:18.411 "name": "Nvme$subsystem", 00:18:18.411 "trtype": "$TEST_TRANSPORT", 00:18:18.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.411 "adrfam": "ipv4", 00:18:18.411 "trsvcid": "$NVMF_PORT", 00:18:18.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.411 "hdgst": ${hdgst:-false}, 00:18:18.411 "ddgst": ${ddgst:-false} 00:18:18.411 }, 00:18:18.411 "method": "bdev_nvme_attach_controller" 00:18:18.411 } 00:18:18.411 EOF 00:18:18.411 )") 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:18:18.411 { 00:18:18.411 "params": { 00:18:18.411 "name": "Nvme$subsystem", 00:18:18.411 "trtype": "$TEST_TRANSPORT", 00:18:18.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.411 "adrfam": "ipv4", 00:18:18.411 "trsvcid": "$NVMF_PORT", 00:18:18.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.411 "hdgst": ${hdgst:-false}, 00:18:18.411 "ddgst": ${ddgst:-false} 00:18:18.411 }, 00:18:18.411 "method": "bdev_nvme_attach_controller" 00:18:18.411 } 00:18:18.411 EOF 00:18:18.411 )") 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 269970 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:18.411 "params": { 00:18:18.411 "name": "Nvme1", 00:18:18.411 "trtype": "tcp", 00:18:18.411 "traddr": "10.0.0.2", 00:18:18.411 "adrfam": "ipv4", 00:18:18.411 "trsvcid": "4420", 00:18:18.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.411 "hdgst": false, 00:18:18.411 "ddgst": false 00:18:18.411 }, 00:18:18.411 "method": "bdev_nvme_attach_controller" 00:18:18.411 }' 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:18.411 "params": { 00:18:18.411 "name": "Nvme1", 00:18:18.411 "trtype": "tcp", 00:18:18.411 "traddr": "10.0.0.2", 00:18:18.411 "adrfam": "ipv4", 00:18:18.411 "trsvcid": "4420", 00:18:18.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.411 "hdgst": false, 00:18:18.411 "ddgst": false 00:18:18.411 }, 00:18:18.411 "method": "bdev_nvme_attach_controller" 00:18:18.411 }' 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:18.411 "params": { 00:18:18.411 "name": "Nvme1", 00:18:18.411 "trtype": "tcp", 00:18:18.411 "traddr": "10.0.0.2", 00:18:18.411 "adrfam": "ipv4", 00:18:18.411 "trsvcid": "4420", 00:18:18.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.411 "hdgst": false, 00:18:18.411 "ddgst": false 00:18:18.411 }, 00:18:18.411 "method": "bdev_nvme_attach_controller" 00:18:18.411 }' 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:18.411 13:29:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:18.411 "params": { 00:18:18.411 "name": "Nvme1", 00:18:18.411 "trtype": "tcp", 00:18:18.411 "traddr": "10.0.0.2", 00:18:18.411 "adrfam": "ipv4", 00:18:18.411 "trsvcid": "4420", 00:18:18.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.411 "hdgst": false, 00:18:18.411 "ddgst": false 00:18:18.411 }, 00:18:18.411 "method": "bdev_nvme_attach_controller" 00:18:18.411 }' 00:18:18.411 [2024-07-13 13:29:53.145250] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:18.411 [2024-07-13 13:29:53.145247] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:18.411 [2024-07-13 13:29:53.145395] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:18.411 [2024-07-13 13:29:53.145422] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:18.412 [2024-07-13 13:29:53.146589] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:18.412 [2024-07-13 13:29:53.146579] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:18.412 [2024-07-13 13:29:53.146723] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:18.412 [2024-07-13 13:29:53.146724] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:18.668 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.668 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.668 [2024-07-13 13:29:53.384746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.926 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.926 [2024-07-13 13:29:53.492529] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.926 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.926 [2024-07-13 13:29:53.568451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.926 [2024-07-13 13:29:53.608522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:18.926 [2024-07-13 13:29:53.640693] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.183 [2024-07-13 13:29:53.718341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:19.183 [2024-07-13 13:29:53.785048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:19.183 [2024-07-13 13:29:53.857084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:18:19.440 Running I/O for 1 seconds... 00:18:19.440 Running I/O for 1 seconds... 00:18:19.697 Running I/O for 1 seconds... 00:18:19.697 Running I/O for 1 seconds...
00:18:20.631 00:18:20.631 Latency(us) 00:18:20.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.631 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:20.631 Nvme1n1 : 1.01 5129.85 20.04 0.00 0.00 24784.00 10097.40 35340.89 00:18:20.631 =================================================================================================================== 00:18:20.631 Total : 5129.85 20.04 0.00 0.00 24784.00 10097.40 35340.89 00:18:20.631 00:18:20.631 Latency(us) 00:18:20.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.631 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:20.631 Nvme1n1 : 1.02 5439.20 21.25 0.00 0.00 23351.45 9029.40 32234.00 00:18:20.631 =================================================================================================================== 00:18:20.631 Total : 5439.20 21.25 0.00 0.00 23351.45 9029.40 32234.00 00:18:20.889 00:18:20.889 Latency(us) 00:18:20.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.889 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:20.889 Nvme1n1 : 1.00 157047.14 613.47 0.00 0.00 812.11 336.78 1128.68 00:18:20.889 =================================================================================================================== 00:18:20.889 Total : 157047.14 613.47 0.00 0.00 812.11 336.78 1128.68 00:18:20.889 00:18:20.889 Latency(us) 00:18:20.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.889 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:20.889 Nvme1n1 : 1.01 7481.48 29.22 0.00 0.00 17014.40 3665.16 25631.86 00:18:20.889 =================================================================================================================== 00:18:20.889 Total : 7481.48 29.22 0.00 0.00 17014.40 3665.16 25631.86 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 269972 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 269974 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 269977 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:21.824 rmmod nvme_tcp 00:18:21.824 rmmod nvme_fabrics 00:18:21.824 rmmod nvme_keyring 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 269698 ']' 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 269698 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 269698 ']' 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 269698 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 269698 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 269698' 00:18:21.824 killing process with pid 269698 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 269698 00:18:21.824 13:29:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 269698 00:18:23.201 13:29:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:23.201 13:29:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:23.201 13:29:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:23.201 13:29:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.201 13:29:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:23.201 13:29:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.201 13:29:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.201 13:29:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.108 13:29:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:25.108 00:18:25.108 real 0m10.331s 00:18:25.108 user 0m30.729s 00:18:25.108 sys 0m4.247s 00:18:25.108 13:29:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:25.108 13:29:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:25.108 ************************************ 00:18:25.108 END TEST nvmf_bdev_io_wait 00:18:25.108 ************************************ 00:18:25.108 13:29:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:25.108 13:29:59 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:25.108 13:29:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:25.108 13:29:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.108 13:29:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:25.108 ************************************ 00:18:25.108 START TEST nvmf_queue_depth 00:18:25.108 ************************************ 
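The teardown that closes nvmf_bdev_io_wait just above follows the suite's usual pattern: reap the perf jobs, drop the exported subsystem, sync, unload the host-side NVMe/TCP modules, stop the target, and flush the initiator address. A condensed sketch using only the PIDs, RPCs, and commands visible in this trace (killprocess and the untraced remove_spdk_ns helper are summarized by the plain kill and address flush here):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # 269970 269972 269974 269977
$spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"              # 269698, the nvmf_tgt reactor_0 process
ip -4 addr flush cvl_0_1     # namespace teardown itself is not traced in this log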
00:18:25.108 13:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:25.367 * Looking for test storage... 00:18:25.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:25.367 13:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.269 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:27.270 
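A note for readers decoding the xtrace above: gather_supported_nvmf_pci_devs sorts candidate NICs into families by PCI vendor:device ID (0x8086 with 0x1592/0x159b is E810, 0x8086:0x37d2 is X722, and the 0x15b3 entries are Mellanox parts), then keeps only the family this job requested, e810. A rough stand-alone sketch of that classification follows; it substitutes an lspci scan for the harness's cached PCI map, so treat the lookup itself as an illustrative assumption rather than the script's actual mechanism.

    # Sketch: bucket Ethernet-class PCI functions by vendor:device ID,
    # mirroring the e810/x722/mlx arrays built in nvmf/common.sh above.
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    while read -r addr vendor device; do
        case "$vendor:$device" in
            "$intel:0x1592"|"$intel:0x159b") e810+=("$addr") ;;  # E810 (ice)
            "$intel:0x37d2")                 x722+=("$addr") ;;  # X722 (i40e)
            "$mellanox:"*)                   mlx+=("$addr")  ;;  # mlx5-family IDs listed above
        esac
    done < <(lspci -Dnmm -d ::0200 | awk '{gsub(/"/,""); print $1, "0x"$3, "0x"$4}')
    pci_devs=("${e810[@]}")   # SPDK_TEST_NVMF_NICS=e810, so only the E810 ports are kept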
13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:27.270 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:27.270 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:27.270 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:27.270 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:27.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:27.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:18:27.270 00:18:27.270 --- 10.0.0.2 ping statistics --- 00:18:27.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.270 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:18:27.270 00:18:27.270 --- 10.0.0.1 ping statistics --- 00:18:27.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.270 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=272448 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 272448 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 272448 ']' 00:18:27.270 13:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.271 13:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.271 13:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.271 13:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.271 13:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.271 [2024-07-13 13:30:01.955185] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
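The nvmf_tcp_init block above is what lets a single host act as both target and initiator: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2/24, the peer port (cvl_0_1) stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in the firewall, reachability is confirmed with a ping in each direction, and only then is nvmf_tgt launched inside the namespace. Condensed from the commands logged above (paths shortened; the interface names are the harness's cvl_* renames of the ice ports):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &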
00:18:27.271 [2024-07-13 13:30:01.955314] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.528 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.528 [2024-07-13 13:30:02.097121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.786 [2024-07-13 13:30:02.352980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.786 [2024-07-13 13:30:02.353055] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.787 [2024-07-13 13:30:02.353077] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.787 [2024-07-13 13:30:02.353096] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.787 [2024-07-13 13:30:02.353113] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.787 [2024-07-13 13:30:02.353161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.372 [2024-07-13 13:30:02.884365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.372 Malloc0 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.372 
13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.372 13:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.372 [2024-07-13 13:30:02.999925] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.372 13:30:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.372 13:30:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=272608 00:18:28.372 13:30:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:28.372 13:30:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:28.372 13:30:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 272608 /var/tmp/bdevperf.sock 00:18:28.372 13:30:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 272608 ']' 00:18:28.372 13:30:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.372 13:30:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.372 13:30:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.372 13:30:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.372 13:30:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.372 [2024-07-13 13:30:03.080092] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
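The rpc_cmd calls traced above are the entire target-side bring-up for the queue-depth test, and the bdevperf process started right after them is the initiator half. Flattened out of the xtrace and written against scripts/rpc.py (which is what the rpc_cmd helper drives under the hood; paths shortened), the workflow is:

    # target side: TCP transport, a RAM-backed bdev, a subsystem, and a listener on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf starts idle (-z), gets an NVMe bdev attached over its own RPC
    # socket, then runs a 10 s verify workload at queue depth 1024 with 4 KiB I/O
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests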
00:18:28.372 [2024-07-13 13:30:03.080261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid272608 ] 00:18:28.630 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.630 [2024-07-13 13:30:03.212196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.888 [2024-07-13 13:30:03.467957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.453 13:30:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.453 13:30:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:29.453 13:30:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:29.453 13:30:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.453 13:30:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:29.453 NVMe0n1 00:18:29.453 13:30:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.453 13:30:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:29.711 Running I/O for 10 seconds... 00:18:39.683 00:18:39.683 Latency(us) 00:18:39.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.683 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:39.683 Verification LBA range: start 0x0 length 0x4000 00:18:39.683 NVMe0n1 : 10.14 5197.56 20.30 0.00 0.00 195481.08 36311.80 116508.44 00:18:39.683 =================================================================================================================== 00:18:39.683 Total : 5197.56 20.30 0.00 0.00 195481.08 36311.80 116508.44 00:18:39.683 0 00:18:39.683 13:30:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 272608 00:18:39.683 13:30:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 272608 ']' 00:18:39.683 13:30:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 272608 00:18:39.683 13:30:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:39.683 13:30:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:39.683 13:30:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 272608 00:18:39.941 13:30:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:39.941 13:30:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:39.941 13:30:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 272608' 00:18:39.941 killing process with pid 272608 00:18:39.941 13:30:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 272608 00:18:39.941 Received shutdown signal, test time was about 10.000000 seconds 00:18:39.941 00:18:39.941 Latency(us) 00:18:39.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.941 =================================================================================================================== 
00:18:39.941 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.941 13:30:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 272608 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:40.894 rmmod nvme_tcp 00:18:40.894 rmmod nvme_fabrics 00:18:40.894 rmmod nvme_keyring 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 272448 ']' 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 272448 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 272448 ']' 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 272448 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 272448 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 272448' 00:18:40.894 killing process with pid 272448 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 272448 00:18:40.894 13:30:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 272448 00:18:42.791 13:30:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.791 13:30:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:42.791 13:30:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:42.791 13:30:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.791 13:30:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.791 13:30:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.791 13:30:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.791 13:30:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.694 13:30:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:44.694 00:18:44.694 real 0m19.310s 00:18:44.694 user 0m24.886s 
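Two quick consistency checks on the result above (queue depth 1024, 4 KiB I/O, 10 s verify run): the bandwidth column follows directly from the IOPS, 5197.56 IOPS x 4096 B ≈ 21.3 MB/s ≈ 20.3 MiB/s, matching the reported MiB/s; and by Little's law the expected mean latency at a sustained queue depth of 1024 is roughly 1024 / 5197.56 ≈ 0.197 s, which agrees with the reported 195,481 us average. The very high per-command latency is what a queue depth of 1024 implies at this throughput, not a sign of a stalled target.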
00:18:44.694 sys 0m4.334s 00:18:44.694 13:30:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:44.694 13:30:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:44.694 ************************************ 00:18:44.694 END TEST nvmf_queue_depth 00:18:44.694 ************************************ 00:18:44.694 13:30:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:44.694 13:30:19 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:44.694 13:30:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:44.694 13:30:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.694 13:30:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:44.694 ************************************ 00:18:44.694 START TEST nvmf_target_multipath 00:18:44.694 ************************************ 00:18:44.694 13:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:44.694 * Looking for test storage... 00:18:44.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:44.694 13:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.694 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:44.694 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.694 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.694 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.694 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.694 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:44.695 13:30:19 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:44.695 13:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:46.596 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:46.597 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:46.597 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:46.597 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:46.597 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:46.597 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:46.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:18:46.856 00:18:46.856 --- 10.0.0.2 ping statistics --- 00:18:46.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.856 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:46.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:46.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:18:46.856 00:18:46.856 --- 10.0.0.1 ping statistics --- 00:18:46.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.856 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:46.856 only one NIC for nvmf test 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:46.856 rmmod nvme_tcp 00:18:46.856 rmmod nvme_fabrics 00:18:46.856 rmmod nvme_keyring 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:46.856 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:46.857 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:46.857 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:46.857 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.857 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:46.857 13:30:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.857 13:30:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.857 13:30:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.758 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:48.758 13:30:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:48.758 13:30:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:49.016 00:18:49.016 real 0m4.334s 00:18:49.016 user 0m0.802s 00:18:49.016 sys 0m1.523s 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:49.016 13:30:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:49.016 ************************************ 00:18:49.016 END TEST nvmf_target_multipath 00:18:49.016 ************************************ 00:18:49.016 13:30:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:49.016 13:30:23 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:49.016 13:30:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:49.016 13:30:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.016 13:30:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:49.016 ************************************ 00:18:49.016 START TEST nvmf_zcopy 00:18:49.016 ************************************ 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:49.016 * Looking for test storage... 
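Each of these suites is driven through the run_test wrapper, which produces the START TEST / END TEST banners and the per-test real/user/sys timing seen above and propagates the suite's exit status back to nvmf.sh. A schematic of that wrapper, inferred from the banners in this log rather than quoted from autotest_common.sh:

    # Assumed shape of run_test, for orientation only; the real helper also handles
    # the timing and xtrace bookkeeping visible in the trace above.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"; local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }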
00:18:49.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.016 13:30:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.017 13:30:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:50.916 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.916 
13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:50.916 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:50.916 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:50.916 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.916 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.178 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.178 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.178 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:51.178 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.178 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.178 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.178 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:51.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:18:51.178 00:18:51.178 --- 10.0.0.2 ping statistics --- 00:18:51.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.178 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:18:51.178 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:51.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:18:51.178 00:18:51.178 --- 10.0.0.1 ping statistics --- 00:18:51.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.179 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=278547 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 278547 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 278547 ']' 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.179 13:30:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:51.179 [2024-07-13 13:30:25.854944] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:51.179 [2024-07-13 13:30:25.855088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.451 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.451 [2024-07-13 13:30:26.002376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.708 [2024-07-13 13:30:26.258962] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.708 [2024-07-13 13:30:26.259043] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
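The nvmf_tcp_init trace above moves one of the two E810 ports found earlier (cvl_0_0, under 0000:0a:00.0) into a dedicated network namespace for the target, while the other port (cvl_0_1, under 0000:0a:00.1) stays on the host as the initiator side. Condensed into a standalone sketch, with the interface, namespace and address values copied from this run's log (run as root), the wiring amounts to:

# Target/initiator wiring reconstructed from the nvmf_tcp_init trace above.
TGT_IF=cvl_0_0        # E810 port 0000:0a:00.0, moved into the target netns
INI_IF=cvl_0_1        # E810 port 0000:0a:00.1, stays host-side as the initiator
NS=cvl_0_0_ns_spdk    # network namespace the nvmf target runs in

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP traffic (port 4420) arriving on the initiator-side interface, as in the trace.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Connectivity check in both directions, as in the trace.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The target application is then launched inside that namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt invocation visible above), so the listeners created in the next step are bound to the 10.0.0.2 side of the link.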
00:18:51.708 [2024-07-13 13:30:26.259072] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.708 [2024-07-13 13:30:26.259111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.708 [2024-07-13 13:30:26.259133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.708 [2024-07-13 13:30:26.259189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:52.273 [2024-07-13 13:30:26.803916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:52.273 [2024-07-13 13:30:26.820179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:52.273 malloc0 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.273 
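With the target running inside the namespace, zcopy.sh provisions it through rpc_cmd, the harness wrapper around SPDK's scripts/rpc.py. Spelled out as direct rpc.py calls (an assumption about the wrapper; the methods, arguments and values are the ones shown in the trace, including the namespace attach that follows a step later), the sequence is roughly:

# Provisioning RPCs from the zcopy.sh trace, written as direct scripts/rpc.py
# calls against the default /var/tmp/spdk.sock (assumed wrapper behaviour).
RPC=./scripts/rpc.py

# TCP transport with zero-copy enabled; -c 0 sets the in-capsule data size to zero.
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem allowing any host (-a), serial SPDK00000000000001, up to 10 namespaces.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Data and discovery listeners on the target-side address configured earlier.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB malloc bdev with a 4096-byte block size, exposed as namespace 1.
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1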
13:30:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:52.273 { 00:18:52.273 "params": { 00:18:52.273 "name": "Nvme$subsystem", 00:18:52.273 "trtype": "$TEST_TRANSPORT", 00:18:52.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.273 "adrfam": "ipv4", 00:18:52.273 "trsvcid": "$NVMF_PORT", 00:18:52.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.273 "hdgst": ${hdgst:-false}, 00:18:52.273 "ddgst": ${ddgst:-false} 00:18:52.273 }, 00:18:52.273 "method": "bdev_nvme_attach_controller" 00:18:52.273 } 00:18:52.273 EOF 00:18:52.273 )") 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:52.273 13:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:52.273 "params": { 00:18:52.273 "name": "Nvme1", 00:18:52.273 "trtype": "tcp", 00:18:52.273 "traddr": "10.0.0.2", 00:18:52.273 "adrfam": "ipv4", 00:18:52.273 "trsvcid": "4420", 00:18:52.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.274 "hdgst": false, 00:18:52.274 "ddgst": false 00:18:52.274 }, 00:18:52.274 "method": "bdev_nvme_attach_controller" 00:18:52.274 }' 00:18:52.274 [2024-07-13 13:30:26.977840] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:52.274 [2024-07-13 13:30:26.978012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278697 ] 00:18:52.530 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.530 [2024-07-13 13:30:27.115283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.787 [2024-07-13 13:30:27.369670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.353 Running I/O for 10 seconds... 
00:19:03.323 00:19:03.323 Latency(us) 00:19:03.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.323 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:03.323 Verification LBA range: start 0x0 length 0x1000 00:19:03.323 Nvme1n1 : 10.02 4295.95 33.56 0.00 0.00 29712.67 904.15 38641.97 00:19:03.323 =================================================================================================================== 00:19:03.323 Total : 4295.95 33.56 0.00 0.00 29712.67 904.15 38641.97 00:19:04.256 13:30:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=280139 00:19:04.256 13:30:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:19:04.256 13:30:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:04.256 13:30:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:04.256 13:30:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:04.256 13:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:04.256 13:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:04.256 13:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:04.256 13:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:04.256 { 00:19:04.256 "params": { 00:19:04.256 "name": "Nvme$subsystem", 00:19:04.256 "trtype": "$TEST_TRANSPORT", 00:19:04.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.256 "adrfam": "ipv4", 00:19:04.256 "trsvcid": "$NVMF_PORT", 00:19:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.256 "hdgst": ${hdgst:-false}, 00:19:04.256 "ddgst": ${ddgst:-false} 00:19:04.256 }, 00:19:04.256 "method": "bdev_nvme_attach_controller" 00:19:04.256 } 00:19:04.256 EOF 00:19:04.256 )") 00:19:04.256 13:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:04.256 [2024-07-13 13:30:38.979073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.256 [2024-07-13 13:30:38.979141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.256 13:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
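Both bdevperf runs in this test get their NVMe-oF connection from a generated JSON config fed over a file descriptor (--json /dev/fd/62 for the 10-second verify run above, /dev/fd/63 for the 5-second randrw run being set up here) rather than from RPC. Taking the bdev_nvme_attach_controller entry printed by gen_nvmf_target_json in the trace and placing it in the usual SPDK subsystems/bdev config layout (the wrapper is inferred; only the inner entry appears verbatim in the log, and the file path below is purely illustrative), a standalone equivalent looks like:

# Standalone equivalent of the second bdevperf invocation; the harness feeds
# the same JSON over /dev/fd/63, here it is written to a file for clarity.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# -q 128 queue depth, -o 8192-byte I/O, -w randrw with -M 50 (50% reads), -t 5 seconds.
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192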
00:19:04.256 13:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:04.256 13:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:04.256 "params": { 00:19:04.256 "name": "Nvme1", 00:19:04.256 "trtype": "tcp", 00:19:04.256 "traddr": "10.0.0.2", 00:19:04.256 "adrfam": "ipv4", 00:19:04.256 "trsvcid": "4420", 00:19:04.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.256 "hdgst": false, 00:19:04.256 "ddgst": false 00:19:04.256 }, 00:19:04.256 "method": "bdev_nvme_attach_controller" 00:19:04.256 }' 00:19:04.256 [2024-07-13 13:30:38.986967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.256 [2024-07-13 13:30:38.987010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.256 [2024-07-13 13:30:38.995005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.256 [2024-07-13 13:30:38.995035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.514 [2024-07-13 13:30:39.003024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.514 [2024-07-13 13:30:39.003058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.514 [2024-07-13 13:30:39.011019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.514 [2024-07-13 13:30:39.011055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.514 [2024-07-13 13:30:39.019052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.514 [2024-07-13 13:30:39.019085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.514 [2024-07-13 13:30:39.027069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.514 [2024-07-13 13:30:39.027098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.514 [2024-07-13 13:30:39.035079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.514 [2024-07-13 13:30:39.035108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.514 [2024-07-13 13:30:39.043159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.514 [2024-07-13 13:30:39.043188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.514 [2024-07-13 13:30:39.051125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.514 [2024-07-13 13:30:39.051170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.514 [2024-07-13 13:30:39.054657] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:04.514 [2024-07-13 13:30:39.054780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280139 ] 00:19:04.514 [2024-07-13 13:30:39.059180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.514 [2024-07-13 13:30:39.059208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.514 [2024-07-13 13:30:39.067196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.514 [2024-07-13 13:30:39.067238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.514 [2024-07-13 13:30:39.075202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.514 [2024-07-13 13:30:39.075228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.514 [2024-07-13 13:30:39.083257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.514 [2024-07-13 13:30:39.083284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.514 [2024-07-13 13:30:39.091279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.514 [2024-07-13 13:30:39.091306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.514 [2024-07-13 13:30:39.099284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.099312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.107316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.107342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.115318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.115345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.123347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.123379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.131370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.131396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.515 [2024-07-13 13:30:39.139431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.139465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.147448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.147481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.155467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.155511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 
13:30:39.163473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.163505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.171517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.171549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.179518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.179551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.187565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.187598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.192983] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.515 [2024-07-13 13:30:39.195578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.195611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.203626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.203670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.211697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.211747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.219653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.219685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.227657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.227689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.235717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.235749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.243701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.243733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.515 [2024-07-13 13:30:39.251743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.515 [2024-07-13 13:30:39.251775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.773 [2024-07-13 13:30:39.259784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.773 [2024-07-13 13:30:39.259817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.773 [2024-07-13 13:30:39.267781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.773 [2024-07-13 13:30:39.267815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.773 [2024-07-13 13:30:39.275817] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.773 [2024-07-13 13:30:39.275853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.283842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.283890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.291844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.291886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.299891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.299943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.307893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.307947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.315943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.315973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.323968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.323997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.332054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.332099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.340014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.340046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.348023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.348051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.356024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.356053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.364049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.364077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.372056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.372083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.380090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.380117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.388134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.388179] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.396117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.396161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.404178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.404206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.412205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.412238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.420216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.420255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.428280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.428313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.436274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.436307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.444308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.444340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.448173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.774 [2024-07-13 13:30:39.452323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.452355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.460345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.460378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.468453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.468506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.476439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.476484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.484400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.484432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.492460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.492494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.500474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.500506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.508487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.508520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:04.774 [2024-07-13 13:30:39.516509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:04.774 [2024-07-13 13:30:39.516543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.524538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.524572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.532561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.532596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.540644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.540697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.548642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.548695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.556700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.556753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.564679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.564731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.572682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.572716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.580696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.580729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.588703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.588737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.596737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.596769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.604759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.604792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.612768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.612800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.620829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:19:05.033 [2024-07-13 13:30:39.620861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.628815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.628847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.636855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.636897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.644896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.644945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.652917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.652945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.660948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.660976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.668961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.668998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.676969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.676996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.685010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.685040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.693063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.693110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.701130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.701198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.709091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.709121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.717086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.717113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.725120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.725165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.733128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.733174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 
13:30:39.741134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.741181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.749215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.749247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.757200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.757233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.765242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.765274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.033 [2024-07-13 13:30:39.773270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.033 [2024-07-13 13:30:39.773303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.781287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.781320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.789314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.789347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.797345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.797380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.805431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.805477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.813474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.813522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.821466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.821502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.829518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.829555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.837537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.837573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.845536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.845570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.853557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.853586] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.861609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.861643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.869615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.869647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.877666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.877703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.885662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.885699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.893713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.893760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.901732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.901767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.909769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.909808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.917785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.917821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 Running I/O for 5 seconds... 
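The subsystem.c:2054 / nvmf_rpc.c:1546 message pairs that repeat through this stretch of the log all record the same condition: an nvmf_subsystem_add_ns request asking for NSID 1 while that NSID is still attached, which the target rejects. Stripped of the surrounding harness, the behaviour they describe can be reproduced with a single further attach call (hypothetical direct invocation, assuming scripts/rpc.py and the subsystem provisioned earlier):

# With NSID 1 already attached during provisioning, re-issuing the same attach
# is refused exactly as in the repeated log lines above:
# "Requested NSID 1 already in use" followed by "Unable to add namespace".
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1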
00:19:05.292 [2024-07-13 13:30:39.925876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.925927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.945607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.945643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.961049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.961084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.976844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.976903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:39.992523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:39.992563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:40.010064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:40.010119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.292 [2024-07-13 13:30:40.026507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.292 [2024-07-13 13:30:40.026549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.550 [2024-07-13 13:30:40.043780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.550 [2024-07-13 13:30:40.043825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.550 [2024-07-13 13:30:40.059806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.550 [2024-07-13 13:30:40.059847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.550 [2024-07-13 13:30:40.076091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.550 [2024-07-13 13:30:40.076127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.550 [2024-07-13 13:30:40.091663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.550 [2024-07-13 13:30:40.091698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.550 [2024-07-13 13:30:40.106741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.550 [2024-07-13 13:30:40.106775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.550 [2024-07-13 13:30:40.122346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.550 [2024-07-13 13:30:40.122388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.550 [2024-07-13 13:30:40.136618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.550 [2024-07-13 13:30:40.136652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.550 [2024-07-13 13:30:40.150893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.550 
[2024-07-13 13:30:40.150926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.550 [2024-07-13 13:30:40.165247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.550 [2024-07-13 13:30:40.165281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.551 [2024-07-13 13:30:40.180143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.551 [2024-07-13 13:30:40.180176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.551 [2024-07-13 13:30:40.194830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.551 [2024-07-13 13:30:40.194863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.551 [2024-07-13 13:30:40.209297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.551 [2024-07-13 13:30:40.209331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.551 [2024-07-13 13:30:40.224378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.551 [2024-07-13 13:30:40.224412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.551 [2024-07-13 13:30:40.239288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.551 [2024-07-13 13:30:40.239322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.551 [2024-07-13 13:30:40.254098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.551 [2024-07-13 13:30:40.254131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.551 [2024-07-13 13:30:40.269499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.551 [2024-07-13 13:30:40.269534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.551 [2024-07-13 13:30:40.285069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.551 [2024-07-13 13:30:40.285103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.300139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.300174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.315278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.315327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.330961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.330995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.345356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.345405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.360413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.360446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.375332] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.375381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.390182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.390216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.405448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.405488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.418463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.418496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.433200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.433234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.449049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.449082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.464791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.464824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.479673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.479706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.495519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.495552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.510221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.510255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.524312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.524347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:05.809 [2024-07-13 13:30:40.539477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:05.809 [2024-07-13 13:30:40.539511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.555039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.555075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.570436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.570470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.584887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.584920] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.599195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.599231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.613808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.613842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.628688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.628723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.643406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.643440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.658533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.658568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.673314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.673362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.688570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.688611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.703643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.703677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.718208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.718243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.733465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.733499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.749008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.749042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.765242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.765276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.780671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.780707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.068 [2024-07-13 13:30:40.797468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.068 [2024-07-13 13:30:40.797501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:40.814940] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:40.814977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:40.831042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:40.831076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:40.846148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:40.846182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:40.861840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:40.861883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:40.878023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:40.878058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:40.893482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:40.893522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:40.908155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:40.908195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:40.923809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:40.923842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:40.939281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:40.939322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:40.954983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:40.955018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:40.969958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:40.969992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:40.985007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:40.985042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:41.000711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:41.000745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:41.016344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:41.016378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:41.031778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:41.031811] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:41.048103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:41.048164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.326 [2024-07-13 13:30:41.063236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.326 [2024-07-13 13:30:41.063270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.079583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.079617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.095451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.095501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.110748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.110798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.126296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.126351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.139290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.139324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.154630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.154664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.169657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.169691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.184921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.184958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.200462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.200511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.213465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.213499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.228771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.228805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.243835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.243876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.259301] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.259335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.584 [2024-07-13 13:30:41.272787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.584 [2024-07-13 13:30:41.272821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.585 [2024-07-13 13:30:41.288252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.585 [2024-07-13 13:30:41.288301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.585 [2024-07-13 13:30:41.303412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.585 [2024-07-13 13:30:41.303446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.585 [2024-07-13 13:30:41.319256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.585 [2024-07-13 13:30:41.319291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.844 [2024-07-13 13:30:41.332685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.844 [2024-07-13 13:30:41.332719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.347457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.347491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.362553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.362586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.377761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.377794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.393077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.393113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.407886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.407920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.423692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.423725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.439126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.439160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.454381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.454414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.469827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.469886] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.485198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.485232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.499982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.500017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.515401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.515451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.531380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.531430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.547128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.547164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.562949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.562983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:06.845 [2024-07-13 13:30:41.579279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:06.845 [2024-07-13 13:30:41.579313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.594950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.594984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.610269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.610303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.626622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.626655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.641691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.641725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.657302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.657336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.672589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.672622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.687689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.687722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.703250] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.703303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.718627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.718661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.731415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.731448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.745279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.745313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.760970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.761004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.776078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.776114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.791623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.791658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.803791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.803826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.819118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.819155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.104 [2024-07-13 13:30:41.835044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.104 [2024-07-13 13:30:41.835079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:41.850825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:41.850885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:41.866311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:41.866345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:41.879229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:41.879266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:41.894688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:41.894722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:41.910747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:41.910795] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:41.925032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:41.925068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:41.940783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:41.940832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:41.956068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:41.956103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:41.971574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:41.971607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:41.987271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:41.987325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:42.002985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:42.003020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:42.016138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:42.016186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:42.031394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:42.031428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:42.046520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:42.046554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:42.062161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:42.062195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:42.077423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:42.077456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.364 [2024-07-13 13:30:42.093205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.364 [2024-07-13 13:30:42.093254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.108811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.108874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.121909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.121951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.137741] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.137775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.150951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.150986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.166660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.166695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.183116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.183151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.199430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.199464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.214373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.214423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.230178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.230227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.245595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.245628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.261266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.261301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.276488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.276538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.292246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.292286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.304443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.304478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.318626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.318660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.334673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.334707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.350301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.350335] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.623 [2024-07-13 13:30:42.365414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.623 [2024-07-13 13:30:42.365448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.381589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.381622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.397377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.397411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.413037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.413079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.429078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.429115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.444774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.444808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.457551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.457585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.473111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.473161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.488423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.488474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.503616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.503649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.519364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.519416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.531799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.531832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.546124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.546173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.561439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.561492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.576355] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.576389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.591641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.591675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.606690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.606724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:07.882 [2024-07-13 13:30:42.622303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:07.882 [2024-07-13 13:30:42.622337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.140 [2024-07-13 13:30:42.638461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.140 [2024-07-13 13:30:42.638495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.140 [2024-07-13 13:30:42.653831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.140 [2024-07-13 13:30:42.653893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.140 [2024-07-13 13:30:42.669816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.140 [2024-07-13 13:30:42.669856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.140 [2024-07-13 13:30:42.685541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.140 [2024-07-13 13:30:42.685574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.140 [2024-07-13 13:30:42.701599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.140 [2024-07-13 13:30:42.701640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.140 [2024-07-13 13:30:42.714973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.140 [2024-07-13 13:30:42.715007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.140 [2024-07-13 13:30:42.730411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.141 [2024-07-13 13:30:42.730445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.141 [2024-07-13 13:30:42.745575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.141 [2024-07-13 13:30:42.745608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.141 [2024-07-13 13:30:42.761276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.141 [2024-07-13 13:30:42.761329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.141 [2024-07-13 13:30:42.776514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.141 [2024-07-13 13:30:42.776548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.141 [2024-07-13 13:30:42.792010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.141 [2024-07-13 13:30:42.792044] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.141 [2024-07-13 13:30:42.804536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.141 [2024-07-13 13:30:42.804569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.141 [2024-07-13 13:30:42.819732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.141 [2024-07-13 13:30:42.819766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.141 [2024-07-13 13:30:42.835024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.141 [2024-07-13 13:30:42.835059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.141 [2024-07-13 13:30:42.849818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.141 [2024-07-13 13:30:42.849877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.141 [2024-07-13 13:30:42.864614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.141 [2024-07-13 13:30:42.864647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.141 [2024-07-13 13:30:42.879752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.141 [2024-07-13 13:30:42.879786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:42.895676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:42.895712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:42.911364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:42.911397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:42.928500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:42.928534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:42.944570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:42.944605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:42.960836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:42.960897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:42.973426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:42.973463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:42.988958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:42.989001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:43.005358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:43.005415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:43.021076] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:43.021112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:43.037557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:43.037591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:43.053360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:43.053394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:43.068991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:43.069026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:43.084664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:43.084698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:43.099638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:43.099671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:43.115467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:43.115501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.399 [2024-07-13 13:30:43.128785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.399 [2024-07-13 13:30:43.128835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.657 [2024-07-13 13:30:43.145193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.657 [2024-07-13 13:30:43.145226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.161280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.161313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.177166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.177200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.192792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.192826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.208732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.208779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.224219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.224252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.239670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.239704] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.254693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.254726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.269918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.269953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.284949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.284984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.299973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.300008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.315739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.315773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.331146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.331180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.346695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.346729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.362636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.362670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.377997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.378034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.658 [2024-07-13 13:30:43.392990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.658 [2024-07-13 13:30:43.393025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.409316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.409350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.424096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.424131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.439538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.439572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.455230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.455283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.468768] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.468802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.484232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.484267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.499395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.499429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.515100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.515135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.530503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.530552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.545601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.545634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.560894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.560930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.575941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.575976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.591524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.591557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.606595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.606629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.621835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.621893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.636799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.636833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.916 [2024-07-13 13:30:43.652738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.916 [2024-07-13 13:30:43.652788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.174 [2024-07-13 13:30:43.668560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.668593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.684688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.684740] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.700260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.700310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.716443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.716477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.732225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.732276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.747641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.747674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.759862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.759906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.774734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.774768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.790083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.790134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.805443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.805477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.820817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.820874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.836007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.836041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.851127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.851161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.866065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.866098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.881491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.881524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.896943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.896988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.175 [2024-07-13 13:30:43.912739] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.175 [2024-07-13 13:30:43.912791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:43.926243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:43.926277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:43.941289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:43.941323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:43.956702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:43.956735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:43.973134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:43.973186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:43.988320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:43.988353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:44.001717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:44.001750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:44.016686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:44.016719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:44.031878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:44.031912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:44.047628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:44.047663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:44.062998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:44.063033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:44.079026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:44.079062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:44.094784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:44.094819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:44.110541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:44.110575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:44.125759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:44.125794] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:44.141755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:44.141791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:44.156887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:44.156923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.433 [2024-07-13 13:30:44.172890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.433 [2024-07-13 13:30:44.172924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.189074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.189111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.204064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.204101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.220260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.220310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.235927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.235963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.252488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.252544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.269211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.269262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.285779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.285816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.302193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.302228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.317580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.317615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.334013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.334050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.350798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.350833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.366235] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.366284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.382558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.382607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.397559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.397609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.413330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.413381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.692 [2024-07-13 13:30:44.428512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.692 [2024-07-13 13:30:44.428564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.444080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.444126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.459761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.459798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.473717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.473754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.489996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.490033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.505656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.505692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.521318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.521354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.536257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.536292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.552489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.552525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.568272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.568321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.584121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.584158] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.599291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.599343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.615207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.615242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.630178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.630229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.645239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.645275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.661085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.661121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.674399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.674449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.951 [2024-07-13 13:30:44.690214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.951 [2024-07-13 13:30:44.690249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.209 [2024-07-13 13:30:44.706315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.209 [2024-07-13 13:30:44.706351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.209 [2024-07-13 13:30:44.721717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.209 [2024-07-13 13:30:44.721753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.209 [2024-07-13 13:30:44.737770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.209 [2024-07-13 13:30:44.737814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.209 [2024-07-13 13:30:44.753525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.209 [2024-07-13 13:30:44.753560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.209 [2024-07-13 13:30:44.770011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.209 [2024-07-13 13:30:44.770048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.209 [2024-07-13 13:30:44.785772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.209 [2024-07-13 13:30:44.785808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.209 [2024-07-13 13:30:44.801682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.209 [2024-07-13 13:30:44.801718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.209 [2024-07-13 13:30:44.817338] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.209 [2024-07-13 13:30:44.817375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.209 [2024-07-13 13:30:44.832776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.209 [2024-07-13 13:30:44.832812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.209 [2024-07-13 13:30:44.848411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.209 [2024-07-13 13:30:44.848446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.210 [2024-07-13 13:30:44.861729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.210 [2024-07-13 13:30:44.861764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.210 [2024-07-13 13:30:44.877591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.210 [2024-07-13 13:30:44.877626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.210 [2024-07-13 13:30:44.890521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.210 [2024-07-13 13:30:44.890556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.210 [2024-07-13 13:30:44.905351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.210 [2024-07-13 13:30:44.905386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.210 [2024-07-13 13:30:44.920849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.210 [2024-07-13 13:30:44.920911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.210 [2024-07-13 13:30:44.936644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.210 [2024-07-13 13:30:44.936680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.210 [2024-07-13 13:30:44.948732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.210 [2024-07-13 13:30:44.948778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.210 00:19:10.210 Latency(us) 00:19:10.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.210 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:19:10.210 Nvme1n1 : 5.01 8162.08 63.77 0.00 0.00 15654.90 5849.69 28738.75 00:19:10.210 =================================================================================================================== 00:19:10.210 Total : 8162.08 63.77 0.00 0.00 15654.90 5849.69 28738.75 00:19:10.210 [2024-07-13 13:30:44.953491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.210 [2024-07-13 13:30:44.953521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:44.961537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:44.961576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:44.969815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:44.969848] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:44.977858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:44.977898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:44.989944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:44.989985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:44.997995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:44.998047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.006073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.006132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.014081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.014141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.021975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.022005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.030033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.030062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.038012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.038040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.046053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.046081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.054071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.054099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.062078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.062105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.070108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.070136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.078132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.078174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.086216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.086269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.094311] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.094372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.102261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.102313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.110268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.110301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.118274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.118314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.126328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.126361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.134323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.134356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.142364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.142397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.150342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.150374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.158395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.158427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.166395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.166426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.174438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.174470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.182459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.182492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.190501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.190534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.198532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.198566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.469 [2024-07-13 13:30:45.206539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.469 [2024-07-13 13:30:45.206573] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.214539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.214573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.222617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.222651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.230562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.230589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.238630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.238663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.246764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.246823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.254695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.254736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.262702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.262735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.270725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.270759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.278730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.278763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.286780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.286813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.294766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.294799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.302824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.302860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.310969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.311027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.318940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.319001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.327006] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.327065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.334939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.334969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.342930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.342957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.350960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.350988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.358959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.358992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.366995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.367023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.375017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.375044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.383042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.383070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.391067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.391095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.399086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.399114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.407095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.407124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.415165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.415211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.423159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.423188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.431199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.431233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.439229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.439264] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.447234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.447267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.455282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.455315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.463300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.463332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.471326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.471360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.767 [2024-07-13 13:30:45.479368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.767 [2024-07-13 13:30:45.479401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.487419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.487471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.495491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.495553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.503425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.503459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.511425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.511457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.519464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.519496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.527487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.527519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.535497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.535530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.543534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.543567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.551534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.551567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.559584] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.559617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.567608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.567642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.575629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.575661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.583732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.583765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.591667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.591699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.599680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.599712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.607716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.607749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.615807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.615862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.623781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.623814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.631780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.631813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.639794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.639827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.647830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.647863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.655854] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.655912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.663857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.663904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.671945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.671973] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.679927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.679956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.687954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.687983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.695979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.696008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.703973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.704006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.712100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.712173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.720059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.720088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.728046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.728075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.736074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.736103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.744070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.744097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.752106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.752133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.760130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.760178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.768182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.768215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.776185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.776227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.046 [2024-07-13 13:30:45.784230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.046 [2024-07-13 13:30:45.784263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.792235] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.792267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.800271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.800304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.808280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.808311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.816392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.816448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.824378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.824417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.832345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.832378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.840388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.840422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.848410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.848444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.856422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.856454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.864513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.864555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.872537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.872602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.880513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.880546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.888533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.888567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.896533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.896566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.904581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.904614] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.912603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.912635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.920598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.920630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.928641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.928674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.936646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.936679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.944694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.944728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.952711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.952744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.960742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.960776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.968773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.968806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.976782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.976814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.984789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.984821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:45.992835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:45.992875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:46.000847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:46.000890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:46.008882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:46.008930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:46.016909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:46.016953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 [2024-07-13 13:30:46.024917] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:11.305 [2024-07-13 13:30:46.024966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (280139) - No such process 00:19:11.305 13:30:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 280139 00:19:11.305 13:30:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:11.305 13:30:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.305 13:30:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:11.305 13:30:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.305 13:30:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:11.305 13:30:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.305 13:30:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:11.305 delay0 00:19:11.305 13:30:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.305 13:30:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:11.305 13:30:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.305 13:30:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:11.563 13:30:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.563 13:30:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:11.563 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.563 [2024-07-13 13:30:46.194504] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:19.673 Initializing NVMe Controllers 00:19:19.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:19.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:19.673 Initialization complete. Launching workers. 
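[editor's note] For readability, the tail of zcopy.sh visible in the trace above (steps @52-@56) can be replayed by hand roughly as follows. This is a sketch only: it assumes an nvmf target is already running and listening on 10.0.0.2:4420, and it uses scripts/rpc.py directly, whereas the test issues the same calls through its rpc_cmd wrapper; the workspace path and all flags are copied from the log.

#!/usr/bin/env bash
# Sketch of the zcopy.sh tail: swap the contested namespace for a delay bdev,
# then drive it with the abort example app over TCP (paths/flags from the log).
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path as shown in the trace
NQN=nqn.2016-06.io.spdk:cnode1

# Drop the namespace that the add-while-paused loop above was repeatedly rejected on.
"$SPDK/scripts/rpc.py" nvmf_subsystem_remove_ns "$NQN" 1

# Wrap malloc0 in a delay bdev so the abort workload has long-lived in-flight I/O.
"$SPDK/scripts/rpc.py" bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Re-expose the delayed bdev as NSID 1 and run the abort example against it.
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" delay0 -n 1
"$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
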
00:19:19.673 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 267, failed: 12488 00:19:19.673 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12665, failed to submit 90 00:19:19.673 success 12542, unsuccess 123, failed 0 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:19.673 rmmod nvme_tcp 00:19:19.673 rmmod nvme_fabrics 00:19:19.673 rmmod nvme_keyring 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 278547 ']' 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 278547 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 278547 ']' 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 278547 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 278547 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 278547' 00:19:19.673 killing process with pid 278547 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 278547 00:19:19.673 13:30:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 278547 00:19:20.241 13:30:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:20.241 13:30:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:20.241 13:30:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:20.241 13:30:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:20.241 13:30:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:20.241 13:30:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.241 13:30:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.241 13:30:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.775 13:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:22.775 00:19:22.775 real 0m33.417s 00:19:22.775 user 0m49.233s 00:19:22.775 sys 0m9.372s 00:19:22.775 13:30:56 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:19:22.775 13:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:22.775 ************************************ 00:19:22.775 END TEST nvmf_zcopy 00:19:22.775 ************************************ 00:19:22.775 13:30:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:22.775 13:30:57 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:22.775 13:30:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:22.775 13:30:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.775 13:30:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:22.775 ************************************ 00:19:22.775 START TEST nvmf_nmic 00:19:22.775 ************************************ 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:22.775 * Looking for test storage... 00:19:22.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.775 13:30:57 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:22.776 13:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:24.675 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:24.675 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:24.675 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:24.675 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:24.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:19:24.675 00:19:24.675 --- 10.0.0.2 ping statistics --- 00:19:24.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.675 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:24.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:19:24.675 00:19:24.675 --- 10.0.0.1 ping statistics --- 00:19:24.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.675 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.675 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=283909 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 283909 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 283909 ']' 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.676 13:30:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:24.676 [2024-07-13 13:30:59.355488] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:24.676 [2024-07-13 13:30:59.355627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.934 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.934 [2024-07-13 13:30:59.490930] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:25.192 [2024-07-13 13:30:59.756310] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.192 [2024-07-13 13:30:59.756382] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:25.192 [2024-07-13 13:30:59.756411] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.192 [2024-07-13 13:30:59.756433] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.192 [2024-07-13 13:30:59.756456] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.192 [2024-07-13 13:30:59.756600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.192 [2024-07-13 13:30:59.756658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.192 [2024-07-13 13:30:59.756705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.192 [2024-07-13 13:30:59.756727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:25.757 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.757 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:19:25.757 13:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.757 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:25.757 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.758 [2024-07-13 13:31:00.316546] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.758 Malloc0 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.758 [2024-07-13 13:31:00.424027] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:25.758 test case1: single bdev can't be used in multiple subsystems 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.758 [2024-07-13 13:31:00.447878] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:25.758 [2024-07-13 13:31:00.447926] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:25.758 [2024-07-13 13:31:00.447956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.758 request: 00:19:25.758 { 00:19:25.758 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:25.758 "namespace": { 00:19:25.758 "bdev_name": "Malloc0", 00:19:25.758 "no_auto_visible": false 00:19:25.758 }, 00:19:25.758 "method": "nvmf_subsystem_add_ns", 00:19:25.758 "req_id": 1 00:19:25.758 } 00:19:25.758 Got JSON-RPC error response 00:19:25.758 response: 00:19:25.758 { 00:19:25.758 "code": -32602, 00:19:25.758 "message": "Invalid parameters" 00:19:25.758 } 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:25.758 Adding namespace failed - expected result. 
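For reference, the nmic setup just traced reduces to the following rpc.py sequence; this is a minimal sketch against the default RPC socket, reusing the bdev and subsystem names from the trace:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  # expected to fail: Malloc0 is already claimed (exclusive_write) by cnode1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0

The JSON-RPC error above (code -32602, "bdev Malloc0 cannot be opened, error=-1") is the expected result for test case 1; test case 2 then adds a second listener on port 4421 and connects to cnode1 over both 4420 and 4421, as the trace continues below.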
00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:25.758 test case2: host connect to nvmf target in multiple paths 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:25.758 [2024-07-13 13:31:00.456036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.758 13:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:26.691 13:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:27.256 13:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:27.256 13:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:19:27.256 13:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:27.256 13:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:27.256 13:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:19:29.153 13:31:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:29.153 13:31:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:29.153 13:31:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:29.153 13:31:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:29.153 13:31:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:29.153 13:31:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:19:29.153 13:31:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:29.153 [global] 00:19:29.153 thread=1 00:19:29.153 invalidate=1 00:19:29.153 rw=write 00:19:29.153 time_based=1 00:19:29.153 runtime=1 00:19:29.153 ioengine=libaio 00:19:29.153 direct=1 00:19:29.153 bs=4096 00:19:29.153 iodepth=1 00:19:29.153 norandommap=0 00:19:29.153 numjobs=1 00:19:29.153 00:19:29.153 verify_dump=1 00:19:29.153 verify_backlog=512 00:19:29.153 verify_state_save=0 00:19:29.153 do_verify=1 00:19:29.153 verify=crc32c-intel 00:19:29.153 [job0] 00:19:29.153 filename=/dev/nvme0n1 00:19:29.153 Could not set queue depth (nvme0n1) 00:19:29.411 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:29.411 fio-3.35 00:19:29.411 Starting 1 thread 00:19:30.781 00:19:30.781 job0: (groupid=0, jobs=1): err= 0: pid=284548: Sat Jul 13 13:31:05 2024 00:19:30.781 read: IOPS=396, BW=1585KiB/s (1623kB/s)(1620KiB/1022msec) 00:19:30.781 slat (nsec): min=15128, max=66309, avg=33155.41, stdev=4658.74 
00:19:30.781 clat (usec): min=457, max=41507, avg=1953.03, stdev=7399.49 00:19:30.781 lat (usec): min=491, max=41539, avg=1986.18, stdev=7397.02 00:19:30.781 clat percentiles (usec): 00:19:30.781 | 1.00th=[ 482], 5.00th=[ 510], 10.00th=[ 523], 20.00th=[ 537], 00:19:30.781 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 562], 60.00th=[ 562], 00:19:30.781 | 70.00th=[ 570], 80.00th=[ 578], 90.00th=[ 594], 95.00th=[ 603], 00:19:30.781 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:30.781 | 99.99th=[41681] 00:19:30.781 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:19:30.781 slat (usec): min=7, max=30368, avg=86.89, stdev=1340.94 00:19:30.781 clat (usec): min=227, max=545, avg=322.59, stdev=44.01 00:19:30.781 lat (usec): min=239, max=30761, avg=409.48, stdev=1345.01 00:19:30.781 clat percentiles (usec): 00:19:30.781 | 1.00th=[ 237], 5.00th=[ 249], 10.00th=[ 265], 20.00th=[ 285], 00:19:30.781 | 30.00th=[ 297], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 334], 00:19:30.781 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 379], 95.00th=[ 392], 00:19:30.781 | 99.00th=[ 424], 99.50th=[ 461], 99.90th=[ 545], 99.95th=[ 545], 00:19:30.781 | 99.99th=[ 545] 00:19:30.781 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:30.781 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:30.781 lat (usec) : 250=3.38%, 500=54.09%, 750=41.00% 00:19:30.781 lat (msec) : 50=1.53% 00:19:30.781 cpu : usr=1.86%, sys=2.35%, ctx=920, majf=0, minf=2 00:19:30.781 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.781 issued rwts: total=405,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.781 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:30.781 00:19:30.781 Run status group 0 (all jobs): 00:19:30.781 READ: bw=1585KiB/s (1623kB/s), 1585KiB/s-1585KiB/s (1623kB/s-1623kB/s), io=1620KiB (1659kB), run=1022-1022msec 00:19:30.781 WRITE: bw=2004KiB/s (2052kB/s), 2004KiB/s-2004KiB/s (2052kB/s-2052kB/s), io=2048KiB (2097kB), run=1022-1022msec 00:19:30.781 00:19:30.781 Disk stats (read/write): 00:19:30.781 nvme0n1: ios=427/512, merge=0/0, ticks=1605/154, in_queue=1759, util=98.70% 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:30.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:30.781 rmmod nvme_tcp 00:19:30.781 rmmod nvme_fabrics 00:19:30.781 rmmod nvme_keyring 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 283909 ']' 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 283909 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 283909 ']' 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 283909 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 283909 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 283909' 00:19:30.781 killing process with pid 283909 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 283909 00:19:30.781 13:31:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 283909 00:19:32.180 13:31:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:32.180 13:31:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:32.180 13:31:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:32.180 13:31:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.180 13:31:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:32.180 13:31:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.180 13:31:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.180 13:31:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.715 13:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:34.715 00:19:34.715 real 0m11.890s 00:19:34.715 user 0m28.094s 00:19:34.715 sys 0m2.515s 00:19:34.715 13:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:34.715 13:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:34.715 ************************************ 00:19:34.715 END TEST nvmf_nmic 00:19:34.715 ************************************ 00:19:34.715 13:31:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:34.715 13:31:08 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:34.715 13:31:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # 
'[' 3 -le 1 ']' 00:19:34.715 13:31:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.715 13:31:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:34.715 ************************************ 00:19:34.715 START TEST nvmf_fio_target 00:19:34.715 ************************************ 00:19:34.715 13:31:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:34.715 * Looking for test storage... 00:19:34.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:34.715 13:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:34.716 13:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.618 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.619 13:31:11 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:36.619 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:36.619 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.619 13:31:11 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:36.619 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:36.619 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:36.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:19:36.619 00:19:36.619 --- 10.0.0.2 ping statistics --- 00:19:36.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.619 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:36.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:19:36.619 00:19:36.619 --- 10.0.0.1 ping statistics --- 00:19:36.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.619 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=286762 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 286762 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 286762 ']' 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
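The nvmfappstart step traced here amounts to launching nvmf_tgt inside the test namespace and blocking until its RPC socket answers; a minimal sketch (paths shortened, polling interval assumed — the harness does the waiting via waitforlisten):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the default RPC socket until the target is ready to accept rpc.py calls
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done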
00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.619 13:31:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.619 [2024-07-13 13:31:11.316076] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:36.619 [2024-07-13 13:31:11.316203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.878 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.878 [2024-07-13 13:31:11.455697] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:37.136 [2024-07-13 13:31:11.726138] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.136 [2024-07-13 13:31:11.726219] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.136 [2024-07-13 13:31:11.726247] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.136 [2024-07-13 13:31:11.726268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.136 [2024-07-13 13:31:11.726289] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.136 [2024-07-13 13:31:11.726439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.136 [2024-07-13 13:31:11.726507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.136 [2024-07-13 13:31:11.726543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.136 [2024-07-13 13:31:11.726554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:37.702 13:31:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.702 13:31:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:19:37.702 13:31:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:37.702 13:31:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:37.702 13:31:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.702 13:31:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.702 13:31:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:37.959 [2024-07-13 13:31:12.594598] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.959 13:31:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:38.524 13:31:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:38.524 13:31:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:38.781 13:31:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:38.781 13:31:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.039 13:31:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
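As the trace continues below, fio.sh groups the malloc bdevs into a striped volume and a concatenated volume before exporting them through cnode1; condensed, the rpc.py calls are (names, sizes and flags taken from the trace):

  # repeated bdev_malloc_create 64 512 calls yield Malloc0..Malloc6 (64 MiB, 512 B blocks)
  ./scripts/rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  ./scripts/rpc.py bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
  # cnode1 then exports four namespaces: Malloc0, Malloc1, raid0 and concat0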
00:19:39.039 13:31:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.297 13:31:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:39.297 13:31:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:39.556 13:31:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:39.814 13:31:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:39.814 13:31:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:40.380 13:31:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:40.380 13:31:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:40.638 13:31:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:40.638 13:31:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:40.896 13:31:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:41.154 13:31:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:41.154 13:31:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:41.412 13:31:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:41.412 13:31:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:41.670 13:31:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:41.670 [2024-07-13 13:31:16.397417] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.927 13:31:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:41.927 13:31:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:42.185 13:31:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:43.119 13:31:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:43.119 13:31:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:43.119 13:31:17 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:43.119 13:31:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:43.119 13:31:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:43.119 13:31:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:45.018 13:31:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:45.018 13:31:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:45.018 13:31:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:45.018 13:31:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:45.018 13:31:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:45.018 13:31:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:45.018 13:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:45.018 [global] 00:19:45.018 thread=1 00:19:45.018 invalidate=1 00:19:45.018 rw=write 00:19:45.018 time_based=1 00:19:45.018 runtime=1 00:19:45.018 ioengine=libaio 00:19:45.018 direct=1 00:19:45.018 bs=4096 00:19:45.018 iodepth=1 00:19:45.018 norandommap=0 00:19:45.018 numjobs=1 00:19:45.018 00:19:45.018 verify_dump=1 00:19:45.018 verify_backlog=512 00:19:45.018 verify_state_save=0 00:19:45.018 do_verify=1 00:19:45.018 verify=crc32c-intel 00:19:45.018 [job0] 00:19:45.018 filename=/dev/nvme0n1 00:19:45.018 [job1] 00:19:45.018 filename=/dev/nvme0n2 00:19:45.018 [job2] 00:19:45.018 filename=/dev/nvme0n3 00:19:45.018 [job3] 00:19:45.018 filename=/dev/nvme0n4 00:19:45.018 Could not set queue depth (nvme0n1) 00:19:45.018 Could not set queue depth (nvme0n2) 00:19:45.018 Could not set queue depth (nvme0n3) 00:19:45.018 Could not set queue depth (nvme0n4) 00:19:45.276 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.276 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.276 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.276 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.276 fio-3.35 00:19:45.276 Starting 4 threads 00:19:46.648 00:19:46.648 job0: (groupid=0, jobs=1): err= 0: pid=287961: Sat Jul 13 13:31:21 2024 00:19:46.648 read: IOPS=101, BW=407KiB/s (416kB/s)(420KiB/1033msec) 00:19:46.648 slat (nsec): min=5923, max=36603, avg=10609.27, stdev=8509.91 00:19:46.648 clat (usec): min=376, max=41060, avg=8135.44, stdev=15996.82 00:19:46.648 lat (usec): min=388, max=41079, avg=8146.05, stdev=16003.52 00:19:46.648 clat percentiles (usec): 00:19:46.648 | 1.00th=[ 383], 5.00th=[ 392], 10.00th=[ 392], 20.00th=[ 396], 00:19:46.648 | 30.00th=[ 404], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 424], 00:19:46.648 | 70.00th=[ 429], 80.00th=[ 494], 90.00th=[41157], 95.00th=[41157], 00:19:46.648 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:46.648 | 99.99th=[41157] 00:19:46.648 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:19:46.648 slat (nsec): min=8123, max=39742, avg=12623.15, stdev=4985.63 00:19:46.648 clat 
(usec): min=221, max=1954, avg=325.65, stdev=136.88 00:19:46.648 lat (usec): min=230, max=1978, avg=338.27, stdev=137.77 00:19:46.648 clat percentiles (usec): 00:19:46.648 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 265], 00:19:46.648 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 310], 00:19:46.648 | 70.00th=[ 326], 80.00th=[ 392], 90.00th=[ 400], 95.00th=[ 433], 00:19:46.648 | 99.00th=[ 586], 99.50th=[ 1549], 99.90th=[ 1958], 99.95th=[ 1958], 00:19:46.648 | 99.99th=[ 1958] 00:19:46.648 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:46.648 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:46.648 lat (usec) : 250=9.40%, 500=85.58%, 750=1.13% 00:19:46.648 lat (msec) : 2=0.65%, 50=3.24% 00:19:46.648 cpu : usr=0.78%, sys=0.58%, ctx=619, majf=0, minf=1 00:19:46.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:46.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.648 issued rwts: total=105,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:46.648 job1: (groupid=0, jobs=1): err= 0: pid=287962: Sat Jul 13 13:31:21 2024 00:19:46.648 read: IOPS=20, BW=82.7KiB/s (84.7kB/s)(84.0KiB/1016msec) 00:19:46.648 slat (nsec): min=11178, max=18713, avg=16049.43, stdev=1376.88 00:19:46.648 clat (usec): min=40917, max=41063, avg=40974.91, stdev=42.21 00:19:46.648 lat (usec): min=40934, max=41078, avg=40990.96, stdev=42.15 00:19:46.648 clat percentiles (usec): 00:19:46.648 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:46.648 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:46.648 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:46.648 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:46.648 | 99.99th=[41157] 00:19:46.648 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:19:46.648 slat (nsec): min=8850, max=49103, avg=11144.10, stdev=3031.51 00:19:46.648 clat (usec): min=228, max=2718, avg=283.78, stdev=111.63 00:19:46.648 lat (usec): min=239, max=2727, avg=294.92, stdev=111.73 00:19:46.648 clat percentiles (usec): 00:19:46.648 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 260], 00:19:46.648 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:19:46.648 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 343], 00:19:46.648 | 99.00th=[ 396], 99.50th=[ 433], 99.90th=[ 2704], 99.95th=[ 2704], 00:19:46.648 | 99.99th=[ 2704] 00:19:46.648 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:46.648 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:46.648 lat (usec) : 250=7.69%, 500=88.18% 00:19:46.648 lat (msec) : 4=0.19%, 50=3.94% 00:19:46.648 cpu : usr=0.10%, sys=0.99%, ctx=535, majf=0, minf=2 00:19:46.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:46.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.648 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:46.648 job2: (groupid=0, jobs=1): err= 0: pid=287963: Sat Jul 13 13:31:21 2024 00:19:46.648 read: 
IOPS=452, BW=1811KiB/s (1854kB/s)(1836KiB/1014msec) 00:19:46.648 slat (nsec): min=6750, max=43486, avg=13889.25, stdev=4260.88 00:19:46.648 clat (usec): min=376, max=42039, avg=1812.08, stdev=7256.41 00:19:46.648 lat (usec): min=389, max=42056, avg=1825.97, stdev=7259.22 00:19:46.648 clat percentiles (usec): 00:19:46.648 | 1.00th=[ 412], 5.00th=[ 424], 10.00th=[ 441], 20.00th=[ 453], 00:19:46.649 | 30.00th=[ 461], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 478], 00:19:46.649 | 70.00th=[ 490], 80.00th=[ 502], 90.00th=[ 537], 95.00th=[ 594], 00:19:46.649 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:46.649 | 99.99th=[42206] 00:19:46.649 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:19:46.649 slat (nsec): min=6987, max=40565, avg=12810.23, stdev=5259.02 00:19:46.649 clat (usec): min=227, max=1658, avg=318.36, stdev=111.56 00:19:46.649 lat (usec): min=240, max=1678, avg=331.17, stdev=112.50 00:19:46.649 clat percentiles (usec): 00:19:46.649 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 251], 00:19:46.649 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 310], 00:19:46.649 | 70.00th=[ 351], 80.00th=[ 383], 90.00th=[ 400], 95.00th=[ 437], 00:19:46.649 | 99.00th=[ 873], 99.50th=[ 1029], 99.90th=[ 1663], 99.95th=[ 1663], 00:19:46.649 | 99.99th=[ 1663] 00:19:46.649 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:46.649 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:46.649 lat (usec) : 250=8.65%, 500=80.33%, 750=8.65%, 1000=0.41% 00:19:46.649 lat (msec) : 2=0.41%, 50=1.54% 00:19:46.649 cpu : usr=0.59%, sys=1.48%, ctx=972, majf=0, minf=1 00:19:46.649 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:46.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.649 issued rwts: total=459,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.649 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:46.649 job3: (groupid=0, jobs=1): err= 0: pid=287964: Sat Jul 13 13:31:21 2024 00:19:46.649 read: IOPS=33, BW=134KiB/s (138kB/s)(136KiB/1012msec) 00:19:46.649 slat (nsec): min=9122, max=70943, avg=26971.56, stdev=13644.76 00:19:46.649 clat (usec): min=519, max=41422, avg=24466.22, stdev=20232.08 00:19:46.649 lat (usec): min=532, max=41434, avg=24493.19, stdev=20230.70 00:19:46.649 clat percentiles (usec): 00:19:46.649 | 1.00th=[ 519], 5.00th=[ 537], 10.00th=[ 603], 20.00th=[ 619], 00:19:46.649 | 30.00th=[ 676], 40.00th=[ 898], 50.00th=[41157], 60.00th=[41157], 00:19:46.649 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:46.649 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:46.649 | 99.99th=[41681] 00:19:46.649 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:19:46.649 slat (nsec): min=7824, max=36937, avg=12887.46, stdev=5514.47 00:19:46.649 clat (usec): min=236, max=2252, avg=328.65, stdev=118.34 00:19:46.649 lat (usec): min=244, max=2275, avg=341.53, stdev=119.98 00:19:46.649 clat percentiles (usec): 00:19:46.649 | 1.00th=[ 243], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 262], 00:19:46.649 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 297], 60.00th=[ 322], 00:19:46.649 | 70.00th=[ 355], 80.00th=[ 396], 90.00th=[ 429], 95.00th=[ 478], 00:19:46.649 | 99.00th=[ 545], 99.50th=[ 586], 99.90th=[ 2245], 99.95th=[ 2245], 00:19:46.649 | 99.99th=[ 
2245] 00:19:46.649 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:46.649 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:46.649 lat (usec) : 250=7.33%, 500=83.33%, 750=5.13%, 1000=0.18% 00:19:46.649 lat (msec) : 2=0.18%, 4=0.18%, 50=3.66% 00:19:46.649 cpu : usr=0.20%, sys=1.09%, ctx=548, majf=0, minf=1 00:19:46.649 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:46.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.649 issued rwts: total=34,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.649 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:46.649 00:19:46.649 Run status group 0 (all jobs): 00:19:46.649 READ: bw=2397KiB/s (2454kB/s), 82.7KiB/s-1811KiB/s (84.7kB/s-1854kB/s), io=2476KiB (2535kB), run=1012-1033msec 00:19:46.649 WRITE: bw=7930KiB/s (8121kB/s), 1983KiB/s-2024KiB/s (2030kB/s-2072kB/s), io=8192KiB (8389kB), run=1012-1033msec 00:19:46.649 00:19:46.649 Disk stats (read/write): 00:19:46.649 nvme0n1: ios=152/512, merge=0/0, ticks=935/159, in_queue=1094, util=99.30% 00:19:46.649 nvme0n2: ios=40/512, merge=0/0, ticks=1643/143, in_queue=1786, util=99.59% 00:19:46.649 nvme0n3: ios=478/512, merge=0/0, ticks=1568/153, in_queue=1721, util=99.58% 00:19:46.649 nvme0n4: ios=84/512, merge=0/0, ticks=968/157, in_queue=1125, util=99.68% 00:19:46.649 13:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:46.649 [global] 00:19:46.649 thread=1 00:19:46.649 invalidate=1 00:19:46.649 rw=randwrite 00:19:46.649 time_based=1 00:19:46.649 runtime=1 00:19:46.649 ioengine=libaio 00:19:46.649 direct=1 00:19:46.649 bs=4096 00:19:46.649 iodepth=1 00:19:46.649 norandommap=0 00:19:46.649 numjobs=1 00:19:46.649 00:19:46.649 verify_dump=1 00:19:46.649 verify_backlog=512 00:19:46.649 verify_state_save=0 00:19:46.649 do_verify=1 00:19:46.649 verify=crc32c-intel 00:19:46.649 [job0] 00:19:46.649 filename=/dev/nvme0n1 00:19:46.649 [job1] 00:19:46.649 filename=/dev/nvme0n2 00:19:46.649 [job2] 00:19:46.649 filename=/dev/nvme0n3 00:19:46.649 [job3] 00:19:46.649 filename=/dev/nvme0n4 00:19:46.649 Could not set queue depth (nvme0n1) 00:19:46.649 Could not set queue depth (nvme0n2) 00:19:46.649 Could not set queue depth (nvme0n3) 00:19:46.649 Could not set queue depth (nvme0n4) 00:19:46.649 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.649 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.649 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.649 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:46.649 fio-3.35 00:19:46.649 Starting 4 threads 00:19:48.050 00:19:48.050 job0: (groupid=0, jobs=1): err= 0: pid=288196: Sat Jul 13 13:31:22 2024 00:19:48.050 read: IOPS=806, BW=3227KiB/s (3305kB/s)(3292KiB/1020msec) 00:19:48.050 slat (nsec): min=5818, max=34107, avg=10728.12, stdev=5908.94 00:19:48.050 clat (usec): min=313, max=41903, avg=818.15, stdev=4249.32 00:19:48.050 lat (usec): min=319, max=41931, avg=828.87, stdev=4249.66 00:19:48.050 clat percentiles (usec): 00:19:48.050 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 
338], 20.00th=[ 347], 00:19:48.050 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 371], 00:19:48.050 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 429], 00:19:48.050 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:48.050 | 99.99th=[41681] 00:19:48.050 write: IOPS=1003, BW=4016KiB/s (4112kB/s)(4096KiB/1020msec); 0 zone resets 00:19:48.050 slat (nsec): min=7206, max=50064, avg=12336.65, stdev=6275.58 00:19:48.050 clat (usec): min=210, max=1129, avg=311.46, stdev=75.81 00:19:48.050 lat (usec): min=217, max=1161, avg=323.80, stdev=79.00 00:19:48.050 clat percentiles (usec): 00:19:48.050 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 243], 00:19:48.050 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 293], 60.00th=[ 322], 00:19:48.050 | 70.00th=[ 351], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 437], 00:19:48.050 | 99.00th=[ 474], 99.50th=[ 482], 99.90th=[ 766], 99.95th=[ 1123], 00:19:48.050 | 99.99th=[ 1123] 00:19:48.050 bw ( KiB/s): min= 3248, max= 4944, per=29.66%, avg=4096.00, stdev=1199.25, samples=2 00:19:48.050 iops : min= 812, max= 1236, avg=1024.00, stdev=299.81, samples=2 00:19:48.050 lat (usec) : 250=15.21%, 500=83.43%, 750=0.54%, 1000=0.22% 00:19:48.050 lat (msec) : 2=0.11%, 50=0.49% 00:19:48.050 cpu : usr=1.67%, sys=2.65%, ctx=1848, majf=0, minf=2 00:19:48.050 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.050 issued rwts: total=823,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.050 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.050 job1: (groupid=0, jobs=1): err= 0: pid=288197: Sat Jul 13 13:31:22 2024 00:19:48.050 read: IOPS=353, BW=1415KiB/s (1449kB/s)(1416KiB/1001msec) 00:19:48.050 slat (nsec): min=4562, max=66873, avg=18954.74, stdev=10564.10 00:19:48.050 clat (usec): min=315, max=42008, avg=2245.46, stdev=8483.01 00:19:48.050 lat (usec): min=321, max=42020, avg=2264.41, stdev=8482.60 00:19:48.050 clat percentiles (usec): 00:19:48.050 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 355], 20.00th=[ 371], 00:19:48.050 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 412], 00:19:48.050 | 70.00th=[ 420], 80.00th=[ 433], 90.00th=[ 461], 95.00th=[ 627], 00:19:48.050 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:19:48.050 | 99.99th=[42206] 00:19:48.050 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:48.050 slat (nsec): min=6186, max=33261, avg=12699.33, stdev=4187.84 00:19:48.050 clat (usec): min=220, max=1013, avg=367.52, stdev=82.32 00:19:48.050 lat (usec): min=228, max=1021, avg=380.22, stdev=82.95 00:19:48.050 clat percentiles (usec): 00:19:48.050 | 1.00th=[ 235], 5.00th=[ 255], 10.00th=[ 269], 20.00th=[ 289], 00:19:48.050 | 30.00th=[ 322], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 388], 00:19:48.050 | 70.00th=[ 404], 80.00th=[ 420], 90.00th=[ 465], 95.00th=[ 494], 00:19:48.050 | 99.00th=[ 553], 99.50th=[ 652], 99.90th=[ 1012], 99.95th=[ 1012], 00:19:48.050 | 99.99th=[ 1012] 00:19:48.050 bw ( KiB/s): min= 4096, max= 4096, per=29.66%, avg=4096.00, stdev= 0.00, samples=1 00:19:48.050 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:48.050 lat (usec) : 250=1.73%, 500=92.26%, 750=3.93%, 1000=0.12% 00:19:48.050 lat (msec) : 2=0.12%, 50=1.85% 00:19:48.050 cpu : usr=0.70%, sys=1.40%, ctx=866, majf=0, minf=1 
00:19:48.050 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.050 issued rwts: total=354,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.050 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.050 job2: (groupid=0, jobs=1): err= 0: pid=288198: Sat Jul 13 13:31:22 2024 00:19:48.050 read: IOPS=488, BW=1955KiB/s (2002kB/s)(2004KiB/1025msec) 00:19:48.050 slat (nsec): min=5768, max=67091, avg=16272.72, stdev=9330.58 00:19:48.050 clat (usec): min=346, max=41534, avg=1644.74, stdev=6929.04 00:19:48.050 lat (usec): min=360, max=41548, avg=1661.02, stdev=6929.77 00:19:48.050 clat percentiles (usec): 00:19:48.050 | 1.00th=[ 355], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 400], 00:19:48.050 | 30.00th=[ 408], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 433], 00:19:48.050 | 70.00th=[ 445], 80.00th=[ 457], 90.00th=[ 486], 95.00th=[ 545], 00:19:48.050 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:48.050 | 99.99th=[41681] 00:19:48.050 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:19:48.050 slat (nsec): min=6890, max=52297, avg=14421.29, stdev=5559.49 00:19:48.050 clat (usec): min=237, max=564, avg=353.32, stdev=56.49 00:19:48.050 lat (usec): min=245, max=593, avg=367.74, stdev=57.33 00:19:48.050 clat percentiles (usec): 00:19:48.050 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 297], 00:19:48.050 | 30.00th=[ 318], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 371], 00:19:48.050 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 420], 95.00th=[ 441], 00:19:48.050 | 99.00th=[ 490], 99.50th=[ 498], 99.90th=[ 562], 99.95th=[ 562], 00:19:48.050 | 99.99th=[ 562] 00:19:48.050 bw ( KiB/s): min= 4096, max= 4096, per=29.66%, avg=4096.00, stdev= 0.00, samples=1 00:19:48.050 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:48.050 lat (usec) : 250=0.30%, 500=95.46%, 750=2.76% 00:19:48.050 lat (msec) : 50=1.48% 00:19:48.050 cpu : usr=0.39%, sys=1.86%, ctx=1015, majf=0, minf=1 00:19:48.050 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.050 issued rwts: total=501,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.050 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.051 job3: (groupid=0, jobs=1): err= 0: pid=288200: Sat Jul 13 13:31:22 2024 00:19:48.051 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:48.051 slat (nsec): min=5847, max=53054, avg=10925.54, stdev=6496.28 00:19:48.051 clat (usec): min=391, max=1830, avg=481.00, stdev=83.22 00:19:48.051 lat (usec): min=398, max=1837, avg=491.92, stdev=84.57 00:19:48.051 clat percentiles (usec): 00:19:48.051 | 1.00th=[ 412], 5.00th=[ 429], 10.00th=[ 437], 20.00th=[ 449], 00:19:48.051 | 30.00th=[ 457], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 478], 00:19:48.051 | 70.00th=[ 486], 80.00th=[ 498], 90.00th=[ 515], 95.00th=[ 537], 00:19:48.051 | 99.00th=[ 816], 99.50th=[ 1057], 99.90th=[ 1565], 99.95th=[ 1827], 00:19:48.051 | 99.99th=[ 1827] 00:19:48.051 write: IOPS=1489, BW=5958KiB/s (6101kB/s)(5964KiB/1001msec); 0 zone resets 00:19:48.051 slat (nsec): min=6889, max=74582, avg=15337.13, stdev=9786.56 00:19:48.051 clat (usec): min=224, max=520, 
avg=311.46, stdev=56.22 00:19:48.051 lat (usec): min=233, max=562, avg=326.80, stdev=60.84 00:19:48.051 clat percentiles (usec): 00:19:48.051 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 258], 00:19:48.051 | 30.00th=[ 269], 40.00th=[ 289], 50.00th=[ 306], 60.00th=[ 318], 00:19:48.051 | 70.00th=[ 343], 80.00th=[ 363], 90.00th=[ 392], 95.00th=[ 412], 00:19:48.051 | 99.00th=[ 457], 99.50th=[ 469], 99.90th=[ 519], 99.95th=[ 523], 00:19:48.051 | 99.99th=[ 523] 00:19:48.051 bw ( KiB/s): min= 6128, max= 6128, per=44.37%, avg=6128.00, stdev= 0.00, samples=1 00:19:48.051 iops : min= 1532, max= 1532, avg=1532.00, stdev= 0.00, samples=1 00:19:48.051 lat (usec) : 250=8.75%, 500=83.86%, 750=6.84%, 1000=0.28% 00:19:48.051 lat (msec) : 2=0.28% 00:19:48.051 cpu : usr=1.90%, sys=4.90%, ctx=2516, majf=0, minf=1 00:19:48.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.051 issued rwts: total=1024,1491,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.051 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.051 00:19:48.051 Run status group 0 (all jobs): 00:19:48.051 READ: bw=10.3MiB/s (10.8MB/s), 1415KiB/s-4092KiB/s (1449kB/s-4190kB/s), io=10.6MiB (11.1MB), run=1001-1025msec 00:19:48.051 WRITE: bw=13.5MiB/s (14.1MB/s), 1998KiB/s-5958KiB/s (2046kB/s-6101kB/s), io=13.8MiB (14.5MB), run=1001-1025msec 00:19:48.051 00:19:48.051 Disk stats (read/write): 00:19:48.051 nvme0n1: ios=743/1024, merge=0/0, ticks=707/308, in_queue=1015, util=96.79% 00:19:48.051 nvme0n2: ios=336/512, merge=0/0, ticks=676/186, in_queue=862, util=88.52% 00:19:48.051 nvme0n3: ios=373/512, merge=0/0, ticks=1574/175, in_queue=1749, util=98.02% 00:19:48.051 nvme0n4: ios=1069/1076, merge=0/0, ticks=1216/318, in_queue=1534, util=97.89% 00:19:48.051 13:31:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:48.051 [global] 00:19:48.051 thread=1 00:19:48.051 invalidate=1 00:19:48.051 rw=write 00:19:48.051 time_based=1 00:19:48.051 runtime=1 00:19:48.051 ioengine=libaio 00:19:48.051 direct=1 00:19:48.051 bs=4096 00:19:48.051 iodepth=128 00:19:48.051 norandommap=0 00:19:48.051 numjobs=1 00:19:48.051 00:19:48.051 verify_dump=1 00:19:48.051 verify_backlog=512 00:19:48.051 verify_state_save=0 00:19:48.051 do_verify=1 00:19:48.051 verify=crc32c-intel 00:19:48.051 [job0] 00:19:48.051 filename=/dev/nvme0n1 00:19:48.051 [job1] 00:19:48.051 filename=/dev/nvme0n2 00:19:48.051 [job2] 00:19:48.051 filename=/dev/nvme0n3 00:19:48.051 [job3] 00:19:48.051 filename=/dev/nvme0n4 00:19:48.051 Could not set queue depth (nvme0n1) 00:19:48.051 Could not set queue depth (nvme0n2) 00:19:48.051 Could not set queue depth (nvme0n3) 00:19:48.051 Could not set queue depth (nvme0n4) 00:19:48.308 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.308 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.308 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.308 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.308 fio-3.35 00:19:48.308 Starting 4 threads 00:19:49.684 00:19:49.684 job0: 
(groupid=0, jobs=1): err= 0: pid=288521: Sat Jul 13 13:31:24 2024 00:19:49.684 read: IOPS=3603, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1012msec) 00:19:49.684 slat (usec): min=2, max=13501, avg=112.15, stdev=734.74 00:19:49.684 clat (usec): min=3891, max=53327, avg=14667.75, stdev=5008.64 00:19:49.684 lat (usec): min=7908, max=57165, avg=14779.90, stdev=5048.85 00:19:49.684 clat percentiles (usec): 00:19:49.684 | 1.00th=[ 9110], 5.00th=[10945], 10.00th=[12125], 20.00th=[12387], 00:19:49.684 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[13566], 00:19:49.684 | 70.00th=[14484], 80.00th=[15795], 90.00th=[19268], 95.00th=[21890], 00:19:49.684 | 99.00th=[39584], 99.50th=[47973], 99.90th=[51643], 99.95th=[51643], 00:19:49.684 | 99.99th=[53216] 00:19:49.684 write: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec); 0 zone resets 00:19:49.684 slat (usec): min=4, max=10622, avg=139.07, stdev=725.58 00:19:49.684 clat (usec): min=1057, max=70852, avg=18258.48, stdev=12447.89 00:19:49.684 lat (usec): min=1088, max=75453, avg=18397.55, stdev=12536.83 00:19:49.684 clat percentiles (usec): 00:19:49.684 | 1.00th=[ 7635], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[11994], 00:19:49.684 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13698], 60.00th=[14222], 00:19:49.684 | 70.00th=[16712], 80.00th=[21103], 90.00th=[29754], 95.00th=[54264], 00:19:49.684 | 99.00th=[64750], 99.50th=[66847], 99.90th=[70779], 99.95th=[70779], 00:19:49.684 | 99.99th=[70779] 00:19:49.684 bw ( KiB/s): min=11848, max=20400, per=28.45%, avg=16124.00, stdev=6047.18, samples=2 00:19:49.684 iops : min= 2962, max= 5100, avg=4031.00, stdev=1511.79, samples=2 00:19:49.684 lat (msec) : 2=0.01%, 4=0.09%, 10=7.21%, 20=77.76%, 50=11.75% 00:19:49.684 lat (msec) : 100=3.18% 00:19:49.684 cpu : usr=3.46%, sys=5.93%, ctx=375, majf=0, minf=1 00:19:49.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:49.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:49.684 issued rwts: total=3647,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:49.684 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:49.684 job1: (groupid=0, jobs=1): err= 0: pid=288542: Sat Jul 13 13:31:24 2024 00:19:49.684 read: IOPS=4439, BW=17.3MiB/s (18.2MB/s)(17.5MiB/1008msec) 00:19:49.684 slat (usec): min=2, max=15533, avg=111.54, stdev=839.40 00:19:49.684 clat (usec): min=3300, max=34079, avg=14696.52, stdev=4461.80 00:19:49.684 lat (usec): min=3441, max=34124, avg=14808.06, stdev=4513.54 00:19:49.684 clat percentiles (usec): 00:19:49.684 | 1.00th=[ 4621], 5.00th=[ 7504], 10.00th=[10421], 20.00th=[12387], 00:19:49.684 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13566], 00:19:49.684 | 70.00th=[16450], 80.00th=[19006], 90.00th=[20055], 95.00th=[22414], 00:19:49.684 | 99.00th=[28967], 99.50th=[28967], 99.90th=[28967], 99.95th=[30802], 00:19:49.684 | 99.99th=[34341] 00:19:49.684 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:19:49.684 slat (usec): min=3, max=22624, avg=95.65, stdev=679.48 00:19:49.684 clat (usec): min=526, max=52375, avg=12857.11, stdev=5912.18 00:19:49.684 lat (usec): min=655, max=52380, avg=12952.76, stdev=5936.41 00:19:49.684 clat percentiles (usec): 00:19:49.684 | 1.00th=[ 2540], 5.00th=[ 5407], 10.00th=[ 6718], 20.00th=[ 7504], 00:19:49.684 | 30.00th=[10421], 40.00th=[11994], 50.00th=[12780], 60.00th=[13304], 00:19:49.684 | 70.00th=[13960], 80.00th=[15139], 
90.00th=[19792], 95.00th=[26346], 00:19:49.684 | 99.00th=[26870], 99.50th=[31327], 99.90th=[50594], 99.95th=[50594], 00:19:49.684 | 99.99th=[52167] 00:19:49.684 bw ( KiB/s): min=16384, max=20480, per=32.52%, avg=18432.00, stdev=2896.31, samples=2 00:19:49.684 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:19:49.684 lat (usec) : 750=0.01% 00:19:49.684 lat (msec) : 2=0.24%, 4=1.31%, 10=17.79%, 20=70.29%, 50=10.29% 00:19:49.684 lat (msec) : 100=0.07% 00:19:49.684 cpu : usr=2.48%, sys=5.06%, ctx=326, majf=0, minf=1 00:19:49.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:49.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:49.685 issued rwts: total=4475,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:49.685 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:49.685 job2: (groupid=0, jobs=1): err= 0: pid=288546: Sat Jul 13 13:31:24 2024 00:19:49.685 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:19:49.685 slat (usec): min=3, max=19632, avg=200.00, stdev=1161.68 00:19:49.685 clat (usec): min=11463, max=64809, avg=24458.58, stdev=12913.60 00:19:49.685 lat (usec): min=11935, max=64815, avg=24658.58, stdev=12963.45 00:19:49.685 clat percentiles (usec): 00:19:49.685 | 1.00th=[12256], 5.00th=[13566], 10.00th=[15008], 20.00th=[15795], 00:19:49.685 | 30.00th=[16188], 40.00th=[16581], 50.00th=[18482], 60.00th=[20317], 00:19:49.685 | 70.00th=[26346], 80.00th=[30540], 90.00th=[49021], 95.00th=[54789], 00:19:49.685 | 99.00th=[60556], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:19:49.685 | 99.99th=[64750] 00:19:49.685 write: IOPS=2633, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1003msec); 0 zone resets 00:19:49.685 slat (usec): min=4, max=11845, avg=177.74, stdev=962.62 00:19:49.685 clat (usec): min=458, max=49269, avg=24021.58, stdev=9834.67 00:19:49.685 lat (usec): min=3521, max=49291, avg=24199.32, stdev=9855.86 00:19:49.685 clat percentiles (usec): 00:19:49.685 | 1.00th=[ 3884], 5.00th=[12387], 10.00th=[14877], 20.00th=[15401], 00:19:49.685 | 30.00th=[15795], 40.00th=[18482], 50.00th=[23200], 60.00th=[26608], 00:19:49.685 | 70.00th=[27132], 80.00th=[31851], 90.00th=[39584], 95.00th=[43779], 00:19:49.685 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:19:49.685 | 99.99th=[49021] 00:19:49.685 bw ( KiB/s): min= 8240, max=12240, per=18.07%, avg=10240.00, stdev=2828.43, samples=2 00:19:49.685 iops : min= 2060, max= 3060, avg=2560.00, stdev=707.11, samples=2 00:19:49.685 lat (usec) : 500=0.02% 00:19:49.685 lat (msec) : 4=0.62%, 10=0.75%, 20=48.49%, 50=46.03%, 100=4.10% 00:19:49.685 cpu : usr=3.19%, sys=3.69%, ctx=264, majf=0, minf=1 00:19:49.685 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:49.685 issued rwts: total=2560,2641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:49.685 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:49.685 job3: (groupid=0, jobs=1): err= 0: pid=288547: Sat Jul 13 13:31:24 2024 00:19:49.685 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:19:49.685 slat (usec): min=2, max=17625, avg=161.46, stdev=1158.24 00:19:49.685 clat (usec): min=8678, max=57100, avg=21428.30, stdev=6853.36 00:19:49.685 lat (usec): min=8690, max=58434, avg=21589.76, 
stdev=6915.04 00:19:49.685 clat percentiles (usec): 00:19:49.685 | 1.00th=[ 9896], 5.00th=[13304], 10.00th=[15533], 20.00th=[16319], 00:19:49.685 | 30.00th=[17433], 40.00th=[17957], 50.00th=[19006], 60.00th=[20841], 00:19:49.685 | 70.00th=[23200], 80.00th=[26346], 90.00th=[31589], 95.00th=[36439], 00:19:49.685 | 99.00th=[38011], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:19:49.685 | 99.99th=[56886] 00:19:49.685 write: IOPS=2968, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1013msec); 0 zone resets 00:19:49.685 slat (usec): min=3, max=25166, avg=191.38, stdev=1329.08 00:19:49.685 clat (usec): min=7398, max=69331, avg=24293.33, stdev=8626.51 00:19:49.685 lat (usec): min=8377, max=69347, avg=24484.71, stdev=8717.91 00:19:49.685 clat percentiles (usec): 00:19:49.685 | 1.00th=[ 9372], 5.00th=[15270], 10.00th=[15664], 20.00th=[17433], 00:19:49.685 | 30.00th=[18744], 40.00th=[20579], 50.00th=[22414], 60.00th=[24249], 00:19:49.685 | 70.00th=[27395], 80.00th=[30278], 90.00th=[36439], 95.00th=[42206], 00:19:49.685 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51119], 99.95th=[58983], 00:19:49.685 | 99.99th=[69731] 00:19:49.685 bw ( KiB/s): min=10744, max=12288, per=20.32%, avg=11516.00, stdev=1091.77, samples=2 00:19:49.685 iops : min= 2686, max= 3072, avg=2879.00, stdev=272.94, samples=2 00:19:49.685 lat (msec) : 10=1.46%, 20=43.34%, 50=53.71%, 100=1.49% 00:19:49.685 cpu : usr=1.98%, sys=2.96%, ctx=224, majf=0, minf=1 00:19:49.685 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:49.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:49.685 issued rwts: total=2560,3007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:49.685 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:49.685 00:19:49.685 Run status group 0 (all jobs): 00:19:49.685 READ: bw=51.1MiB/s (53.5MB/s), 9.87MiB/s-17.3MiB/s (10.4MB/s-18.2MB/s), io=51.7MiB (54.2MB), run=1003-1013msec 00:19:49.685 WRITE: bw=55.3MiB/s (58.0MB/s), 10.3MiB/s-17.9MiB/s (10.8MB/s-18.7MB/s), io=56.1MiB (58.8MB), run=1003-1013msec 00:19:49.685 00:19:49.685 Disk stats (read/write): 00:19:49.685 nvme0n1: ios=3616/3584, merge=0/0, ticks=28781/32906, in_queue=61687, util=99.00% 00:19:49.685 nvme0n2: ios=3635/3959, merge=0/0, ticks=45016/35233, in_queue=80249, util=97.56% 00:19:49.685 nvme0n3: ios=1951/2048, merge=0/0, ticks=13504/12878, in_queue=26382, util=88.91% 00:19:49.685 nvme0n4: ios=2093/2560, merge=0/0, ticks=25274/28236, in_queue=53510, util=96.94% 00:19:49.685 13:31:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:49.685 [global] 00:19:49.685 thread=1 00:19:49.685 invalidate=1 00:19:49.685 rw=randwrite 00:19:49.685 time_based=1 00:19:49.685 runtime=1 00:19:49.685 ioengine=libaio 00:19:49.685 direct=1 00:19:49.685 bs=4096 00:19:49.685 iodepth=128 00:19:49.685 norandommap=0 00:19:49.685 numjobs=1 00:19:49.685 00:19:49.685 verify_dump=1 00:19:49.685 verify_backlog=512 00:19:49.685 verify_state_save=0 00:19:49.685 do_verify=1 00:19:49.685 verify=crc32c-intel 00:19:49.685 [job0] 00:19:49.685 filename=/dev/nvme0n1 00:19:49.685 [job1] 00:19:49.685 filename=/dev/nvme0n2 00:19:49.685 [job2] 00:19:49.685 filename=/dev/nvme0n3 00:19:49.685 [job3] 00:19:49.685 filename=/dev/nvme0n4 00:19:49.685 Could not set queue depth (nvme0n1) 00:19:49.685 Could not set queue depth (nvme0n2) 00:19:49.685 Could 
not set queue depth (nvme0n3) 00:19:49.685 Could not set queue depth (nvme0n4) 00:19:49.685 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:49.685 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:49.685 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:49.685 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:49.685 fio-3.35 00:19:49.685 Starting 4 threads 00:19:51.059 00:19:51.059 job0: (groupid=0, jobs=1): err= 0: pid=288771: Sat Jul 13 13:31:25 2024 00:19:51.059 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:19:51.059 slat (usec): min=2, max=13499, avg=204.77, stdev=1222.71 00:19:51.059 clat (usec): min=3162, max=53761, avg=26856.14, stdev=11728.62 00:19:51.059 lat (usec): min=3190, max=56195, avg=27060.91, stdev=11785.82 00:19:51.059 clat percentiles (usec): 00:19:51.059 | 1.00th=[ 6980], 5.00th=[ 9372], 10.00th=[11731], 20.00th=[13173], 00:19:51.059 | 30.00th=[16319], 40.00th=[27395], 50.00th=[29754], 60.00th=[31589], 00:19:51.059 | 70.00th=[33817], 80.00th=[35390], 90.00th=[41157], 95.00th=[49021], 00:19:51.059 | 99.00th=[51643], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:19:51.059 | 99.99th=[53740] 00:19:51.059 write: IOPS=2454, BW=9817KiB/s (10.1MB/s)(9856KiB/1004msec); 0 zone resets 00:19:51.059 slat (usec): min=4, max=6643, avg=222.69, stdev=841.43 00:19:51.059 clat (usec): min=3265, max=73910, avg=29191.49, stdev=16183.77 00:19:51.059 lat (usec): min=3884, max=73933, avg=29414.18, stdev=16287.83 00:19:51.059 clat percentiles (usec): 00:19:51.059 | 1.00th=[ 6259], 5.00th=[10945], 10.00th=[12649], 20.00th=[13304], 00:19:51.059 | 30.00th=[13698], 40.00th=[14091], 50.00th=[33817], 60.00th=[35914], 00:19:51.059 | 70.00th=[38011], 80.00th=[41157], 90.00th=[46400], 95.00th=[62129], 00:19:51.059 | 99.00th=[71828], 99.50th=[71828], 99.90th=[73925], 99.95th=[73925], 00:19:51.059 | 99.99th=[73925] 00:19:51.059 bw ( KiB/s): min= 6408, max=12288, per=16.52%, avg=9348.00, stdev=4157.79, samples=2 00:19:51.059 iops : min= 1602, max= 3072, avg=2337.00, stdev=1039.45, samples=2 00:19:51.059 lat (msec) : 4=0.20%, 10=5.03%, 20=32.87%, 50=56.29%, 100=5.61% 00:19:51.059 cpu : usr=2.99%, sys=4.19%, ctx=311, majf=0, minf=13 00:19:51.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:51.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.059 issued rwts: total=2048,2464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.059 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.059 job1: (groupid=0, jobs=1): err= 0: pid=288772: Sat Jul 13 13:31:25 2024 00:19:51.059 read: IOPS=4721, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1002msec) 00:19:51.059 slat (usec): min=2, max=5679, avg=103.69, stdev=559.78 00:19:51.059 clat (usec): min=749, max=21034, avg=12593.01, stdev=2142.25 00:19:51.059 lat (usec): min=2455, max=21039, avg=12696.70, stdev=2160.18 00:19:51.059 clat percentiles (usec): 00:19:51.059 | 1.00th=[ 6849], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[11469], 00:19:51.059 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:19:51.059 | 70.00th=[13304], 80.00th=[14091], 90.00th=[15270], 95.00th=[16188], 00:19:51.059 | 99.00th=[17695], 
99.50th=[18482], 99.90th=[21103], 99.95th=[21103], 00:19:51.059 | 99.99th=[21103] 00:19:51.059 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:19:51.059 slat (usec): min=3, max=5671, avg=93.20, stdev=506.73 00:19:51.059 clat (usec): min=6247, max=21989, avg=13111.13, stdev=2481.90 00:19:51.059 lat (usec): min=6266, max=21997, avg=13204.33, stdev=2503.09 00:19:51.059 clat percentiles (usec): 00:19:51.059 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[10552], 20.00th=[11994], 00:19:51.059 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:19:51.059 | 70.00th=[13566], 80.00th=[14091], 90.00th=[15926], 95.00th=[19792], 00:19:51.059 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21890], 99.95th=[21890], 00:19:51.059 | 99.99th=[21890] 00:19:51.059 bw ( KiB/s): min=20440, max=20480, per=36.17%, avg=20460.00, stdev=28.28, samples=2 00:19:51.059 iops : min= 5110, max= 5120, avg=5115.00, stdev= 7.07, samples=2 00:19:51.059 lat (usec) : 750=0.01% 00:19:51.059 lat (msec) : 4=0.05%, 10=10.92%, 20=87.14%, 50=1.88% 00:19:51.059 cpu : usr=3.90%, sys=6.19%, ctx=488, majf=0, minf=7 00:19:51.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:51.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.059 issued rwts: total=4731,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.059 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.059 job2: (groupid=0, jobs=1): err= 0: pid=288775: Sat Jul 13 13:31:25 2024 00:19:51.059 read: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1004msec) 00:19:51.059 slat (usec): min=3, max=7787, avg=118.99, stdev=704.70 00:19:51.059 clat (usec): min=1199, max=24868, avg=14862.11, stdev=2304.74 00:19:51.059 lat (usec): min=5585, max=24881, avg=14981.10, stdev=2352.52 00:19:51.059 clat percentiles (usec): 00:19:51.059 | 1.00th=[ 5997], 5.00th=[11076], 10.00th=[12518], 20.00th=[13698], 00:19:51.059 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14615], 60.00th=[15008], 00:19:51.059 | 70.00th=[15533], 80.00th=[16712], 90.00th=[17433], 95.00th=[18744], 00:19:51.059 | 99.00th=[20579], 99.50th=[21627], 99.90th=[23725], 99.95th=[23987], 00:19:51.059 | 99.99th=[24773] 00:19:51.059 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:19:51.059 slat (usec): min=4, max=14735, avg=119.15, stdev=718.66 00:19:51.059 clat (usec): min=7700, max=41878, avg=16260.16, stdev=4037.27 00:19:51.059 lat (usec): min=7707, max=41928, avg=16379.31, stdev=4090.77 00:19:51.059 clat percentiles (usec): 00:19:51.059 | 1.00th=[ 9372], 5.00th=[11863], 10.00th=[13566], 20.00th=[14222], 00:19:51.059 | 30.00th=[14484], 40.00th=[15008], 50.00th=[15270], 60.00th=[15533], 00:19:51.059 | 70.00th=[15926], 80.00th=[16712], 90.00th=[21103], 95.00th=[26870], 00:19:51.059 | 99.00th=[29492], 99.50th=[29754], 99.90th=[36439], 99.95th=[39584], 00:19:51.059 | 99.99th=[41681] 00:19:51.059 bw ( KiB/s): min=16384, max=16416, per=28.99%, avg=16400.00, stdev=22.63, samples=2 00:19:51.059 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:19:51.059 lat (msec) : 2=0.01%, 10=2.01%, 20=91.31%, 50=6.67% 00:19:51.059 cpu : usr=5.58%, sys=6.58%, ctx=424, majf=0, minf=13 00:19:51.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:51.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.059 issued rwts: total=4017,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.059 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.059 job3: (groupid=0, jobs=1): err= 0: pid=288776: Sat Jul 13 13:31:25 2024 00:19:51.059 read: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec) 00:19:51.059 slat (usec): min=3, max=10240, avg=191.28, stdev=1002.97 00:19:51.059 clat (usec): min=14394, max=38489, avg=24685.85, stdev=4203.69 00:19:51.059 lat (usec): min=15660, max=40533, avg=24877.13, stdev=4158.38 00:19:51.059 clat percentiles (usec): 00:19:51.059 | 1.00th=[16057], 5.00th=[18482], 10.00th=[18744], 20.00th=[20055], 00:19:51.059 | 30.00th=[23725], 40.00th=[24249], 50.00th=[25035], 60.00th=[25560], 00:19:51.059 | 70.00th=[26346], 80.00th=[27132], 90.00th=[29754], 95.00th=[32113], 00:19:51.059 | 99.00th=[35914], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 00:19:51.059 | 99.99th=[38536] 00:19:51.059 write: IOPS=2513, BW=9.82MiB/s (10.3MB/s)(9.84MiB/1002msec); 0 zone resets 00:19:51.059 slat (usec): min=4, max=15011, avg=233.25, stdev=1053.50 00:19:51.059 clat (usec): min=1333, max=44258, avg=29995.98, stdev=6480.94 00:19:51.059 lat (usec): min=7150, max=44274, avg=30229.23, stdev=6481.68 00:19:51.059 clat percentiles (usec): 00:19:51.059 | 1.00th=[ 7439], 5.00th=[20579], 10.00th=[21627], 20.00th=[26084], 00:19:51.059 | 30.00th=[27132], 40.00th=[27919], 50.00th=[28705], 60.00th=[31589], 00:19:51.059 | 70.00th=[34866], 80.00th=[35914], 90.00th=[38536], 95.00th=[40109], 00:19:51.059 | 99.00th=[41681], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:19:51.059 | 99.99th=[44303] 00:19:51.059 bw ( KiB/s): min= 8704, max=10432, per=16.91%, avg=9568.00, stdev=1221.88, samples=2 00:19:51.060 iops : min= 2176, max= 2608, avg=2392.00, stdev=305.47, samples=2 00:19:51.060 lat (msec) : 2=0.02%, 10=0.70%, 20=8.96%, 50=90.32% 00:19:51.060 cpu : usr=2.70%, sys=4.10%, ctx=250, majf=0, minf=19 00:19:51.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:51.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:51.060 issued rwts: total=2048,2519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:51.060 00:19:51.060 Run status group 0 (all jobs): 00:19:51.060 READ: bw=50.0MiB/s (52.4MB/s), 8159KiB/s-18.4MiB/s (8355kB/s-19.3MB/s), io=50.2MiB (52.6MB), run=1002-1004msec 00:19:51.060 WRITE: bw=55.2MiB/s (57.9MB/s), 9817KiB/s-20.0MiB/s (10.1MB/s-20.9MB/s), io=55.5MiB (58.2MB), run=1002-1004msec 00:19:51.060 00:19:51.060 Disk stats (read/write): 00:19:51.060 nvme0n1: ios=2087/2103, merge=0/0, ticks=20931/19269, in_queue=40200, util=96.89% 00:19:51.060 nvme0n2: ios=4127/4268, merge=0/0, ticks=20972/19817, in_queue=40789, util=87.30% 00:19:51.060 nvme0n3: ios=3201/3584, merge=0/0, ticks=24092/27631, in_queue=51723, util=96.55% 00:19:51.060 nvme0n4: ios=1832/2048, merge=0/0, ticks=11361/18104, in_queue=29465, util=98.10% 00:19:51.060 13:31:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:51.060 13:31:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=288917 00:19:51.060 13:31:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:51.060 13:31:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:51.060 [global] 
00:19:51.060 thread=1 00:19:51.060 invalidate=1 00:19:51.060 rw=read 00:19:51.060 time_based=1 00:19:51.060 runtime=10 00:19:51.060 ioengine=libaio 00:19:51.060 direct=1 00:19:51.060 bs=4096 00:19:51.060 iodepth=1 00:19:51.060 norandommap=1 00:19:51.060 numjobs=1 00:19:51.060 00:19:51.060 [job0] 00:19:51.060 filename=/dev/nvme0n1 00:19:51.060 [job1] 00:19:51.060 filename=/dev/nvme0n2 00:19:51.060 [job2] 00:19:51.060 filename=/dev/nvme0n3 00:19:51.060 [job3] 00:19:51.060 filename=/dev/nvme0n4 00:19:51.060 Could not set queue depth (nvme0n1) 00:19:51.060 Could not set queue depth (nvme0n2) 00:19:51.060 Could not set queue depth (nvme0n3) 00:19:51.060 Could not set queue depth (nvme0n4) 00:19:51.060 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.060 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.060 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.060 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.060 fio-3.35 00:19:51.060 Starting 4 threads 00:19:54.342 13:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:54.342 13:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:54.342 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=2125824, buflen=4096 00:19:54.342 fio: pid=289009, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:54.342 13:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:54.342 13:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:54.342 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=667648, buflen=4096 00:19:54.342 fio: pid=289008, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:54.600 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=30973952, buflen=4096 00:19:54.600 fio: pid=289006, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:54.600 13:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:54.600 13:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:55.167 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=1880064, buflen=4096 00:19:55.167 fio: pid=289007, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:55.167 13:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:55.167 13:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:55.167 00:19:55.167 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=289006: Sat Jul 13 13:31:29 2024 00:19:55.167 read: IOPS=2244, BW=8976KiB/s (9191kB/s)(29.5MiB/3370msec) 00:19:55.167 slat (usec): min=4, max=33402, avg=18.98, stdev=425.71 
00:19:55.167 clat (usec): min=290, max=41313, avg=422.61, stdev=812.67 00:19:55.167 lat (usec): min=295, max=41320, avg=441.59, stdev=918.17 00:19:55.167 clat percentiles (usec): 00:19:55.167 | 1.00th=[ 310], 5.00th=[ 334], 10.00th=[ 359], 20.00th=[ 371], 00:19:55.167 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 396], 60.00th=[ 412], 00:19:55.167 | 70.00th=[ 424], 80.00th=[ 441], 90.00th=[ 469], 95.00th=[ 519], 00:19:55.167 | 99.00th=[ 553], 99.50th=[ 562], 99.90th=[ 1057], 99.95th=[ 3228], 00:19:55.167 | 99.99th=[41157] 00:19:55.167 bw ( KiB/s): min= 7672, max= 9960, per=97.35%, avg=9069.33, stdev=860.91, samples=6 00:19:55.167 iops : min= 1918, max= 2490, avg=2267.33, stdev=215.23, samples=6 00:19:55.167 lat (usec) : 500=93.18%, 750=6.60%, 1000=0.09% 00:19:55.167 lat (msec) : 2=0.07%, 4=0.01%, 50=0.04% 00:19:55.167 cpu : usr=1.72%, sys=3.53%, ctx=7570, majf=0, minf=1 00:19:55.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.167 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.167 issued rwts: total=7563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.167 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=289007: Sat Jul 13 13:31:29 2024 00:19:55.167 read: IOPS=123, BW=491KiB/s (503kB/s)(1836KiB/3737msec) 00:19:55.167 slat (usec): min=6, max=9788, avg=73.52, stdev=682.21 00:19:55.167 clat (usec): min=308, max=95928, avg=8063.47, stdev=16444.69 00:19:55.167 lat (usec): min=316, max=103684, avg=8122.40, stdev=16607.42 00:19:55.167 clat percentiles (usec): 00:19:55.167 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 375], 00:19:55.167 | 30.00th=[ 400], 40.00th=[ 445], 50.00th=[ 465], 60.00th=[ 478], 00:19:55.167 | 70.00th=[ 490], 80.00th=[ 562], 90.00th=[41157], 95.00th=[41157], 00:19:55.167 | 99.00th=[41681], 99.50th=[41681], 99.90th=[95945], 99.95th=[95945], 00:19:55.167 | 99.99th=[95945] 00:19:55.167 bw ( KiB/s): min= 87, max= 3056, per=5.57%, avg=519.86, stdev=1118.36, samples=7 00:19:55.167 iops : min= 21, max= 764, avg=129.86, stdev=279.64, samples=7 00:19:55.167 lat (usec) : 500=72.83%, 750=8.48%, 1000=0.22% 00:19:55.167 lat (msec) : 50=17.83%, 100=0.43% 00:19:55.167 cpu : usr=0.00%, sys=0.43%, ctx=466, majf=0, minf=1 00:19:55.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.167 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.167 issued rwts: total=460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.167 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=289008: Sat Jul 13 13:31:29 2024 00:19:55.167 read: IOPS=52, BW=207KiB/s (212kB/s)(652KiB/3143msec) 00:19:55.167 slat (nsec): min=12854, max=58430, avg=25959.60, stdev=9501.27 00:19:55.167 clat (usec): min=374, max=41453, avg=19107.96, stdev=20263.03 00:19:55.167 lat (usec): min=390, max=41468, avg=19133.86, stdev=20258.39 00:19:55.167 clat percentiles (usec): 00:19:55.167 | 1.00th=[ 383], 5.00th=[ 412], 10.00th=[ 424], 20.00th=[ 453], 00:19:55.167 | 30.00th=[ 465], 40.00th=[ 478], 50.00th=[ 502], 60.00th=[41157], 00:19:55.167 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 
95.00th=[41157], 00:19:55.167 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:55.167 | 99.99th=[41681] 00:19:55.167 bw ( KiB/s): min= 96, max= 752, per=2.29%, avg=213.33, stdev=264.15, samples=6 00:19:55.167 iops : min= 24, max= 188, avg=53.33, stdev=66.04, samples=6 00:19:55.167 lat (usec) : 500=50.00%, 750=3.66% 00:19:55.167 lat (msec) : 50=45.73% 00:19:55.167 cpu : usr=0.00%, sys=0.22%, ctx=166, majf=0, minf=1 00:19:55.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.167 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.167 issued rwts: total=164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.167 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=289009: Sat Jul 13 13:31:29 2024 00:19:55.167 read: IOPS=179, BW=716KiB/s (733kB/s)(2076KiB/2899msec) 00:19:55.167 slat (nsec): min=6828, max=31887, avg=8905.31, stdev=3078.41 00:19:55.167 clat (usec): min=323, max=41228, avg=5530.09, stdev=13544.74 00:19:55.167 lat (usec): min=331, max=41239, avg=5538.99, stdev=13547.29 00:19:55.167 clat percentiles (usec): 00:19:55.167 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 355], 00:19:55.167 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 367], 00:19:55.167 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[41157], 95.00th=[41157], 00:19:55.167 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:55.167 | 99.99th=[41157] 00:19:55.168 bw ( KiB/s): min= 96, max= 3688, per=8.74%, avg=814.40, stdev=1606.39, samples=5 00:19:55.168 iops : min= 24, max= 922, avg=203.60, stdev=401.60, samples=5 00:19:55.168 lat (usec) : 500=86.15%, 750=0.96% 00:19:55.168 lat (msec) : 50=12.69% 00:19:55.168 cpu : usr=0.03%, sys=0.28%, ctx=524, majf=0, minf=1 00:19:55.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.168 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.168 issued rwts: total=520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.168 00:19:55.168 Run status group 0 (all jobs): 00:19:55.168 READ: bw=9315KiB/s (9539kB/s), 207KiB/s-8976KiB/s (212kB/s-9191kB/s), io=34.0MiB (35.6MB), run=2899-3737msec 00:19:55.168 00:19:55.168 Disk stats (read/write): 00:19:55.168 nvme0n1: ios=7525/0, merge=0/0, ticks=3091/0, in_queue=3091, util=94.08% 00:19:55.168 nvme0n2: ios=497/0, merge=0/0, ticks=4139/0, in_queue=4139, util=99.22% 00:19:55.168 nvme0n3: ios=210/0, merge=0/0, ticks=4126/0, in_queue=4126, util=99.41% 00:19:55.168 nvme0n4: ios=555/0, merge=0/0, ticks=2981/0, in_queue=2981, util=99.63% 00:19:55.426 13:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:55.426 13:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:55.683 13:31:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:55.683 13:31:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:19:55.942 13:31:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:55.942 13:31:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:56.200 13:31:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:56.200 13:31:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:56.458 13:31:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:56.458 13:31:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 288917 00:19:56.458 13:31:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:56.458 13:31:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:57.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:57.388 13:31:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:57.388 13:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:57.388 13:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:57.388 13:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.388 13:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:57.388 13:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.388 13:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:57.388 13:31:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:57.388 13:31:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:57.388 nvmf hotplug test: fio failed as expected 00:19:57.388 13:31:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:57.645 rmmod nvme_tcp 00:19:57.645 rmmod nvme_fabrics 00:19:57.645 rmmod nvme_keyring 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:57.645 13:31:32 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 286762 ']' 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 286762 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 286762 ']' 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 286762 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 286762 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 286762' 00:19:57.645 killing process with pid 286762 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 286762 00:19:57.645 13:31:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 286762 00:19:59.017 13:31:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:59.017 13:31:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:59.017 13:31:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:59.017 13:31:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.017 13:31:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.017 13:31:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.017 13:31:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.017 13:31:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.550 13:31:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:01.550 00:20:01.550 real 0m26.716s 00:20:01.550 user 1m32.605s 00:20:01.550 sys 0m6.281s 00:20:01.550 13:31:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:01.550 13:31:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.550 ************************************ 00:20:01.550 END TEST nvmf_fio_target 00:20:01.550 ************************************ 00:20:01.550 13:31:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:01.550 13:31:35 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:01.550 13:31:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:01.550 13:31:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.550 13:31:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:01.550 ************************************ 00:20:01.550 START TEST nvmf_bdevio 00:20:01.550 ************************************ 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:01.550 * Looking for test storage... 00:20:01.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.550 13:31:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:20:01.551 13:31:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:03.452 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:03.452 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:03.452 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:03.452 
Found net devices under 0000:0a:00.1: cvl_0_1 00:20:03.452 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:03.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:20:03.453 00:20:03.453 --- 10.0.0.2 ping statistics --- 00:20:03.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.453 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:03.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:03.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:20:03.453 00:20:03.453 --- 10.0.0.1 ping statistics --- 00:20:03.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.453 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=291885 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 291885 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 291885 ']' 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.453 13:31:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:03.453 [2024-07-13 13:31:37.961350] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:03.453 [2024-07-13 13:31:37.961492] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.453 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.453 [2024-07-13 13:31:38.104798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.711 [2024-07-13 13:31:38.373907] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.711 [2024-07-13 13:31:38.373996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:03.711 [2024-07-13 13:31:38.374026] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.711 [2024-07-13 13:31:38.374048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.711 [2024-07-13 13:31:38.374070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.711 [2024-07-13 13:31:38.374199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:03.711 [2024-07-13 13:31:38.374242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:03.711 [2024-07-13 13:31:38.374284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.711 [2024-07-13 13:31:38.374295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:04.278 [2024-07-13 13:31:38.888061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:04.278 Malloc0 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:20:04.278 [2024-07-13 13:31:38.991070] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.278 { 00:20:04.278 "params": { 00:20:04.278 "name": "Nvme$subsystem", 00:20:04.278 "trtype": "$TEST_TRANSPORT", 00:20:04.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.278 "adrfam": "ipv4", 00:20:04.278 "trsvcid": "$NVMF_PORT", 00:20:04.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.278 "hdgst": ${hdgst:-false}, 00:20:04.278 "ddgst": ${ddgst:-false} 00:20:04.278 }, 00:20:04.278 "method": "bdev_nvme_attach_controller" 00:20:04.278 } 00:20:04.278 EOF 00:20:04.278 )") 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:20:04.278 13:31:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:04.278 "params": { 00:20:04.278 "name": "Nvme1", 00:20:04.278 "trtype": "tcp", 00:20:04.278 "traddr": "10.0.0.2", 00:20:04.278 "adrfam": "ipv4", 00:20:04.278 "trsvcid": "4420", 00:20:04.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.278 "hdgst": false, 00:20:04.278 "ddgst": false 00:20:04.278 }, 00:20:04.278 "method": "bdev_nvme_attach_controller" 00:20:04.278 }' 00:20:04.537 [2024-07-13 13:31:39.069133] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:04.537 [2024-07-13 13:31:39.069277] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292040 ] 00:20:04.537 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.537 [2024-07-13 13:31:39.193831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:04.844 [2024-07-13 13:31:39.439010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.844 [2024-07-13 13:31:39.439053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.844 [2024-07-13 13:31:39.439062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.122 I/O targets: 00:20:05.122 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:05.122 00:20:05.122 00:20:05.122 CUnit - A unit testing framework for C - Version 2.1-3 00:20:05.122 http://cunit.sourceforge.net/ 00:20:05.122 00:20:05.122 00:20:05.122 Suite: bdevio tests on: Nvme1n1 00:20:05.381 Test: blockdev write read block ...passed 00:20:05.381 Test: blockdev write zeroes read block ...passed 00:20:05.381 Test: blockdev write zeroes read no split ...passed 00:20:05.381 Test: blockdev write zeroes read split ...passed 00:20:05.381 Test: blockdev write zeroes read split partial ...passed 00:20:05.381 Test: blockdev reset ...[2024-07-13 13:31:40.115798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:05.381 [2024-07-13 13:31:40.115986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:20:05.641 [2024-07-13 13:31:40.135300] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:05.641 passed 00:20:05.641 Test: blockdev write read 8 blocks ...passed 00:20:05.641 Test: blockdev write read size > 128k ...passed 00:20:05.641 Test: blockdev write read invalid size ...passed 00:20:05.641 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:05.641 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:05.641 Test: blockdev write read max offset ...passed 00:20:05.641 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:05.641 Test: blockdev writev readv 8 blocks ...passed 00:20:05.641 Test: blockdev writev readv 30 x 1block ...passed 00:20:05.641 Test: blockdev writev readv block ...passed 00:20:05.641 Test: blockdev writev readv size > 128k ...passed 00:20:05.641 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:05.641 Test: blockdev comparev and writev ...[2024-07-13 13:31:40.356804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.641 [2024-07-13 13:31:40.356889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.641 [2024-07-13 13:31:40.356931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.641 [2024-07-13 13:31:40.356958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:05.641 [2024-07-13 13:31:40.357465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.641 [2024-07-13 13:31:40.357503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:05.641 [2024-07-13 13:31:40.357538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.641 [2024-07-13 13:31:40.357564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:05.641 [2024-07-13 13:31:40.358083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.641 [2024-07-13 13:31:40.358115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:05.641 [2024-07-13 13:31:40.358154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.641 [2024-07-13 13:31:40.358180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:05.641 [2024-07-13 13:31:40.358687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.641 [2024-07-13 13:31:40.358719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:05.641 [2024-07-13 13:31:40.358752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.641 [2024-07-13 13:31:40.358786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:05.899 passed 00:20:05.899 Test: blockdev nvme passthru rw ...passed 00:20:05.899 Test: blockdev nvme passthru vendor specific ...[2024-07-13 13:31:40.442415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.899 [2024-07-13 13:31:40.442478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:05.899 [2024-07-13 13:31:40.442795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.899 [2024-07-13 13:31:40.442829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:05.899 [2024-07-13 13:31:40.443075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.899 [2024-07-13 13:31:40.443107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:05.900 [2024-07-13 13:31:40.443388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.900 [2024-07-13 13:31:40.443421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:05.900 passed 00:20:05.900 Test: blockdev nvme admin passthru ...passed 00:20:05.900 Test: blockdev copy ...passed 00:20:05.900 00:20:05.900 Run Summary: Type Total Ran Passed Failed Inactive 00:20:05.900 suites 1 1 n/a 0 0 00:20:05.900 tests 23 23 23 0 0 00:20:05.900 asserts 152 152 152 0 n/a 00:20:05.900 00:20:05.900 Elapsed time = 1.318 seconds 00:20:06.833 13:31:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.833 13:31:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.833 13:31:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:06.833 13:31:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.834 rmmod nvme_tcp 00:20:06.834 rmmod nvme_fabrics 00:20:06.834 rmmod nvme_keyring 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 291885 ']' 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 291885 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
291885 ']' 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 291885 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.834 13:31:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 291885 00:20:07.092 13:31:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:07.092 13:31:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:07.092 13:31:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 291885' 00:20:07.092 killing process with pid 291885 00:20:07.092 13:31:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 291885 00:20:07.092 13:31:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 291885 00:20:08.464 13:31:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:08.464 13:31:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:08.464 13:31:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:08.464 13:31:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.464 13:31:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.464 13:31:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.464 13:31:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.464 13:31:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.361 13:31:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.361 00:20:10.361 real 0m9.300s 00:20:10.361 user 0m22.182s 00:20:10.361 sys 0m2.331s 00:20:10.361 13:31:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.361 13:31:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:10.361 ************************************ 00:20:10.361 END TEST nvmf_bdevio 00:20:10.361 ************************************ 00:20:10.361 13:31:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:10.361 13:31:45 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:10.361 13:31:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:10.361 13:31:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.361 13:31:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:10.620 ************************************ 00:20:10.620 START TEST nvmf_auth_target 00:20:10.620 ************************************ 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:10.620 * Looking for test storage... 
00:20:10.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.620 13:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.515 13:31:47 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:12.515 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:12.515 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:20:12.515 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:12.515 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:12.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:20:12.515 00:20:12.515 --- 10.0.0.2 ping statistics --- 00:20:12.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.515 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:20:12.515 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:12.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:20:12.516 00:20:12.516 --- 10.0.0.1 ping statistics --- 00:20:12.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.516 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=294376 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 294376 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 294376 ']' 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
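Everything from remove_spdk_ns up to the two pings is nvmf_tcp_init building the NVMe/TCP test topology out of the two E810 ports: one port is pushed into a private network namespace and plays the target, the other stays in the root namespace as the initiator. A condensed sketch of the same setup, reusing the names from this run (substitute your own interfaces):

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"            # start from clean interfaces
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                                 # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                             # initiator address (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"         # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1      # verify both directions

The nvmf target is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), so it listens on 10.0.0.2 while the host-side pieces talk from 10.0.0.1.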
00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.516 13:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=294524 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bbc3b48b8312e32931bc22efe6b61ddc6e62c58c4e52154a 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.lfO 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bbc3b48b8312e32931bc22efe6b61ddc6e62c58c4e52154a 0 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bbc3b48b8312e32931bc22efe6b61ddc6e62c58c4e52154a 0 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bbc3b48b8312e32931bc22efe6b61ddc6e62c58c4e52154a 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.lfO 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.lfO 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.lfO 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=01ffb529398925f2ec558e85e3b90f36e1b1474a556e9c742e2da29f8e8b7242 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gaC 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 01ffb529398925f2ec558e85e3b90f36e1b1474a556e9c742e2da29f8e8b7242 3 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 01ffb529398925f2ec558e85e3b90f36e1b1474a556e9c742e2da29f8e8b7242 3 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=01ffb529398925f2ec558e85e3b90f36e1b1474a556e9c742e2da29f8e8b7242 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gaC 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gaC 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.gaC 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2e36edbe7ad3f23dba70b8519a192081 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wFv 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2e36edbe7ad3f23dba70b8519a192081 1 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2e36edbe7ad3f23dba70b8519a192081 1 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=2e36edbe7ad3f23dba70b8519a192081 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wFv 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wFv 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.wFv 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4448d84b856eed84354ae0568aba2420d1d564f234b5af24 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2Dc 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4448d84b856eed84354ae0568aba2420d1d564f234b5af24 2 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4448d84b856eed84354ae0568aba2420d1d564f234b5af24 2 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4448d84b856eed84354ae0568aba2420d1d564f234b5af24 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:13.885 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2Dc 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2Dc 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.2Dc 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=533106d66a36cb502ea61c51ec5270e4ad4e6d17828cebc7 00:20:13.886 
13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Nso 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 533106d66a36cb502ea61c51ec5270e4ad4e6d17828cebc7 2 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 533106d66a36cb502ea61c51ec5270e4ad4e6d17828cebc7 2 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=533106d66a36cb502ea61c51ec5270e4ad4e6d17828cebc7 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Nso 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Nso 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Nso 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b5f7d76b595ac048fafc126fb1b93e41 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2kF 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b5f7d76b595ac048fafc126fb1b93e41 1 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b5f7d76b595ac048fafc126fb1b93e41 1 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b5f7d76b595ac048fafc126fb1b93e41 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2kF 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2kF 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.2kF 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ee530b6ef71504bf45366279627019bcc463c1de9814f9dfe1d1bc6547af2043 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qKF 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ee530b6ef71504bf45366279627019bcc463c1de9814f9dfe1d1bc6547af2043 3 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ee530b6ef71504bf45366279627019bcc463c1de9814f9dfe1d1bc6547af2043 3 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ee530b6ef71504bf45366279627019bcc463c1de9814f9dfe1d1bc6547af2043 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qKF 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qKF 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.qKF 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 294376 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 294376 ']' 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
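Each gen_dhchap_key call above pulls len/2 random bytes with xxd, wraps the resulting hex string into a DHHC-1 secret through the small python heredoc, writes it to a mktemp'd file with mode 0600 and hands the path back into keys[]/ckeys[]. Judging from the secrets passed to nvme connect later in this log, the wrapped form appears to be base64 of the ASCII key followed by a 4-byte trailer (presumably a CRC-32), behind a two-digit digest id (00=null, 01=sha256, 02=sha384, 03=sha512). A self-contained approximation under those assumptions, not the verbatim common.sh helper:

gen_dhchap_secret() {   # usage: gen_dhchap_secret <digest-id 0..3> <key-length-in-hex-chars>
  local digest=$1 len=$2 key file
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. 48 hex characters from 24 random bytes
  file=$(mktemp -t spdk.key-XXXXXX)
  python3 - "$digest" "$key" > "$file" <<'PYEOF'
import base64, sys, zlib
digest, key = int(sys.argv[1]), sys.argv[2].encode()
crc = zlib.crc32(key).to_bytes(4, "little")        # assumed CRC-32 trailer
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
PYEOF
  chmod 0600 "$file"
  echo "$file"
}

gen_dhchap_secret 0 48   # yields a file holding a DHHC-1:00:... secret comparable to keys[0] above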
00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.886 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.143 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.143 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:14.143 13:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 294524 /var/tmp/host.sock 00:20:14.143 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 294524 ']' 00:20:14.143 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:20:14.143 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.143 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:14.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:14.143 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.143 13:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.075 13:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.075 13:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:15.075 13:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:20:15.075 13:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.075 13:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.075 13:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.075 13:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:15.075 13:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lfO 00:20:15.075 13:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.075 13:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.075 13:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.076 13:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.lfO 00:20:15.076 13:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.lfO 00:20:15.333 13:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.gaC ]] 00:20:15.333 13:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gaC 00:20:15.333 13:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.333 13:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.333 13:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.333 13:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gaC 00:20:15.333 13:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gaC 00:20:15.591 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:15.591 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.wFv 00:20:15.591 13:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.591 13:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.591 13:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.591 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.wFv 00:20:15.591 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.wFv 00:20:15.849 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.2Dc ]] 00:20:15.849 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Dc 00:20:15.849 13:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.849 13:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.849 13:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.849 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Dc 00:20:15.849 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Dc 00:20:16.107 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:16.107 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Nso 00:20:16.107 13:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.107 13:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.107 13:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.107 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Nso 00:20:16.107 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Nso 00:20:16.365 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.2kF ]] 00:20:16.365 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2kF 00:20:16.365 13:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.365 13:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.365 13:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.365 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2kF 00:20:16.365 13:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.2kF 00:20:16.623 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:16.623 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qKF 00:20:16.623 13:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.623 13:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.623 13:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.623 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qKF 00:20:16.623 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qKF 00:20:16.880 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:20:16.880 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:16.880 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.880 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.880 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:16.880 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:17.138 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:20:17.138 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.138 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:17.138 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:17.138 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:17.138 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.138 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.138 13:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.138 13:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.138 13:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.138 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.138 13:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.396 00:20:17.396 13:31:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.396 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.396 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.654 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.654 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.654 13:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.654 13:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.654 13:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.654 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.654 { 00:20:17.654 "cntlid": 1, 00:20:17.654 "qid": 0, 00:20:17.654 "state": "enabled", 00:20:17.654 "thread": "nvmf_tgt_poll_group_000", 00:20:17.654 "listen_address": { 00:20:17.654 "trtype": "TCP", 00:20:17.654 "adrfam": "IPv4", 00:20:17.654 "traddr": "10.0.0.2", 00:20:17.654 "trsvcid": "4420" 00:20:17.654 }, 00:20:17.654 "peer_address": { 00:20:17.654 "trtype": "TCP", 00:20:17.654 "adrfam": "IPv4", 00:20:17.654 "traddr": "10.0.0.1", 00:20:17.654 "trsvcid": "37888" 00:20:17.654 }, 00:20:17.654 "auth": { 00:20:17.654 "state": "completed", 00:20:17.654 "digest": "sha256", 00:20:17.654 "dhgroup": "null" 00:20:17.654 } 00:20:17.654 } 00:20:17.654 ]' 00:20:17.654 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.654 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.654 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.654 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:17.654 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.912 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.912 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.912 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.170 13:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:20:19.100 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.101 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.101 13:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.101 13:31:53 
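One full connect_authenticate pass - the sha256/null/key0 round that has just completed above - reduces to a handful of RPCs. A compressed sketch, assuming the target app answers on the default /var/tmp/spdk.sock and the host app on /var/tmp/host.sock as in this run:

rpc=scripts/rpc.py; host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# 1. register the key files with both keyrings
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.lfO                  # target side
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gaC
$rpc -s $host_sock keyring_file_add_key key0  /tmp/spdk.key-null.lfO    # host side
$rpc -s $host_sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gaC

# 2. pin the host to one digest/dhgroup combination
$rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# 3. require DH-HMAC-CHAP for this host; the ctrlr key makes authentication bidirectional
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. attach - the controller only appears in bdev_nvme_get_controllers if authentication succeeded
$rpc -s $host_sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q $hostnqn -n $subnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0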
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.101 13:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.101 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.101 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:19.101 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:19.357 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:20:19.357 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.357 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:19.357 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:19.357 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:19.357 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.357 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.357 13:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.357 13:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.357 13:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.357 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.357 13:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.614 00:20:19.614 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.614 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.614 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.883 { 00:20:19.883 "cntlid": 3, 00:20:19.883 "qid": 0, 00:20:19.883 
"state": "enabled", 00:20:19.883 "thread": "nvmf_tgt_poll_group_000", 00:20:19.883 "listen_address": { 00:20:19.883 "trtype": "TCP", 00:20:19.883 "adrfam": "IPv4", 00:20:19.883 "traddr": "10.0.0.2", 00:20:19.883 "trsvcid": "4420" 00:20:19.883 }, 00:20:19.883 "peer_address": { 00:20:19.883 "trtype": "TCP", 00:20:19.883 "adrfam": "IPv4", 00:20:19.883 "traddr": "10.0.0.1", 00:20:19.883 "trsvcid": "37914" 00:20:19.883 }, 00:20:19.883 "auth": { 00:20:19.883 "state": "completed", 00:20:19.883 "digest": "sha256", 00:20:19.883 "dhgroup": "null" 00:20:19.883 } 00:20:19.883 } 00:20:19.883 ]' 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.883 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.145 13:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:20:21.078 13:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.078 13:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.078 13:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.078 13:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.078 13:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.078 13:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.078 13:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:21.078 13:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:21.337 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:20:21.337 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.337 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:21.337 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:21.337 13:31:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:21.337 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.337 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.337 13:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.337 13:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.337 13:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.337 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.337 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.903 00:20:21.903 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.903 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.903 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.903 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.903 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.903 13:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.903 13:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.903 13:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.903 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.903 { 00:20:21.903 "cntlid": 5, 00:20:21.903 "qid": 0, 00:20:21.903 "state": "enabled", 00:20:21.903 "thread": "nvmf_tgt_poll_group_000", 00:20:21.903 "listen_address": { 00:20:21.903 "trtype": "TCP", 00:20:21.903 "adrfam": "IPv4", 00:20:21.903 "traddr": "10.0.0.2", 00:20:21.903 "trsvcid": "4420" 00:20:21.903 }, 00:20:21.903 "peer_address": { 00:20:21.903 "trtype": "TCP", 00:20:21.903 "adrfam": "IPv4", 00:20:21.903 "traddr": "10.0.0.1", 00:20:21.903 "trsvcid": "37924" 00:20:21.903 }, 00:20:21.903 "auth": { 00:20:21.903 "state": "completed", 00:20:21.903 "digest": "sha256", 00:20:21.903 "dhgroup": "null" 00:20:21.903 } 00:20:21.903 } 00:20:21.903 ]' 00:20:21.903 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.160 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.160 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.160 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:22.160 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:20:22.160 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.160 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.160 13:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.418 13:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:20:23.357 13:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.357 13:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.357 13:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.357 13:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.357 13:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.357 13:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.357 13:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:23.357 13:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:23.615 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:20:23.615 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.615 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:23.615 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:23.615 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:23.615 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.615 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:23.615 13:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.615 13:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.615 13:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.615 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.615 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.873 00:20:23.873 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.873 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.873 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.130 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.130 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.130 13:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.130 13:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.388 13:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.388 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.388 { 00:20:24.388 "cntlid": 7, 00:20:24.388 "qid": 0, 00:20:24.388 "state": "enabled", 00:20:24.388 "thread": "nvmf_tgt_poll_group_000", 00:20:24.388 "listen_address": { 00:20:24.388 "trtype": "TCP", 00:20:24.388 "adrfam": "IPv4", 00:20:24.388 "traddr": "10.0.0.2", 00:20:24.388 "trsvcid": "4420" 00:20:24.388 }, 00:20:24.388 "peer_address": { 00:20:24.388 "trtype": "TCP", 00:20:24.388 "adrfam": "IPv4", 00:20:24.388 "traddr": "10.0.0.1", 00:20:24.388 "trsvcid": "35656" 00:20:24.388 }, 00:20:24.388 "auth": { 00:20:24.388 "state": "completed", 00:20:24.388 "digest": "sha256", 00:20:24.388 "dhgroup": "null" 00:20:24.388 } 00:20:24.388 } 00:20:24.388 ]' 00:20:24.388 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.388 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.388 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.388 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:24.388 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.388 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.388 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.388 13:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.646 13:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:20:25.580 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.580 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.580 13:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.580 13:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.580 13:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.580 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.580 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.580 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:25.580 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:25.839 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:20:25.839 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.839 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:25.839 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:25.839 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:25.839 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.839 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.839 13:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.839 13:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.839 13:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.839 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.839 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.405 00:20:26.405 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.405 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.405 13:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.405 13:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.405 13:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.405 13:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:26.405 13:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.405 13:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.405 13:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.405 { 00:20:26.405 "cntlid": 9, 00:20:26.405 "qid": 0, 00:20:26.405 "state": "enabled", 00:20:26.405 "thread": "nvmf_tgt_poll_group_000", 00:20:26.405 "listen_address": { 00:20:26.405 "trtype": "TCP", 00:20:26.405 "adrfam": "IPv4", 00:20:26.405 "traddr": "10.0.0.2", 00:20:26.405 "trsvcid": "4420" 00:20:26.405 }, 00:20:26.405 "peer_address": { 00:20:26.405 "trtype": "TCP", 00:20:26.405 "adrfam": "IPv4", 00:20:26.405 "traddr": "10.0.0.1", 00:20:26.405 "trsvcid": "35670" 00:20:26.405 }, 00:20:26.405 "auth": { 00:20:26.405 "state": "completed", 00:20:26.405 "digest": "sha256", 00:20:26.405 "dhgroup": "ffdhe2048" 00:20:26.405 } 00:20:26.405 } 00:20:26.405 ]' 00:20:26.405 13:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.663 13:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.663 13:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.663 13:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.663 13:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.663 13:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.663 13:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.663 13:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.921 13:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:20:27.855 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.855 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.855 13:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.855 13:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.855 13:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.855 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.855 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:27.855 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:28.113 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:20:28.113 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.113 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:28.113 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:28.113 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:28.113 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.113 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.113 13:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.113 13:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.113 13:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.113 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.113 13:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.370 00:20:28.370 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.370 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.370 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.627 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.627 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.627 13:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.627 13:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.627 13:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.627 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.627 { 00:20:28.627 "cntlid": 11, 00:20:28.627 "qid": 0, 00:20:28.627 "state": "enabled", 00:20:28.627 "thread": "nvmf_tgt_poll_group_000", 00:20:28.627 "listen_address": { 00:20:28.627 "trtype": "TCP", 00:20:28.627 "adrfam": "IPv4", 00:20:28.627 "traddr": "10.0.0.2", 00:20:28.627 "trsvcid": "4420" 00:20:28.627 }, 00:20:28.627 "peer_address": { 00:20:28.627 "trtype": "TCP", 00:20:28.627 "adrfam": "IPv4", 00:20:28.627 "traddr": "10.0.0.1", 00:20:28.627 "trsvcid": "35702" 00:20:28.627 }, 00:20:28.627 "auth": { 00:20:28.627 "state": "completed", 00:20:28.627 "digest": "sha256", 00:20:28.627 "dhgroup": "ffdhe2048" 00:20:28.627 } 00:20:28.627 } 00:20:28.627 ]' 00:20:28.627 
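The qpair dump above and the jq probes that follow it are the verification step of every authentication round in this log: once the host-side attach succeeds, the script asks the target for the subsystem's queue pairs and confirms the negotiated digest, DH group and auth state. A minimal sketch of that check, assuming the same subsystem NQN and host RPC socket seen throughout this log (rpc_cmd and hostrpc in the trace are the test's thin wrappers around rpc.py, whose full paths are abbreviated here):

  # host-side RPC socket: confirm the attached controller is the expected nvme0
  name=$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  # target-side RPC (what rpc_cmd wraps here): dump the subsystem's qpairs
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # the negotiated parameters are reported under .auth on each qpair
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]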
13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.627 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.627 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.627 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.627 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.883 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.883 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.883 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.140 13:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:20:30.073 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.073 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.073 13:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.073 13:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.073 13:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.073 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.073 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.073 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.331 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:20:30.331 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.331 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:30.331 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:30.331 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:30.331 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.331 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.331 13:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.331 13:32:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:30.331 13:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.331 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.331 13:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.588 00:20:30.588 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.588 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.588 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.846 { 00:20:30.846 "cntlid": 13, 00:20:30.846 "qid": 0, 00:20:30.846 "state": "enabled", 00:20:30.846 "thread": "nvmf_tgt_poll_group_000", 00:20:30.846 "listen_address": { 00:20:30.846 "trtype": "TCP", 00:20:30.846 "adrfam": "IPv4", 00:20:30.846 "traddr": "10.0.0.2", 00:20:30.846 "trsvcid": "4420" 00:20:30.846 }, 00:20:30.846 "peer_address": { 00:20:30.846 "trtype": "TCP", 00:20:30.846 "adrfam": "IPv4", 00:20:30.846 "traddr": "10.0.0.1", 00:20:30.846 "trsvcid": "35724" 00:20:30.846 }, 00:20:30.846 "auth": { 00:20:30.846 "state": "completed", 00:20:30.846 "digest": "sha256", 00:20:30.846 "dhgroup": "ffdhe2048" 00:20:30.846 } 00:20:30.846 } 00:20:30.846 ]' 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.846 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.104 13:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:20:32.475 13:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.475 13:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.475 13:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.475 13:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.475 13:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.475 13:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.475 13:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:32.475 13:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:32.475 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:20:32.475 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.475 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:32.475 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:32.475 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:32.475 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.475 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:32.475 13:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.475 13:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.475 13:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.475 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.475 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.733 00:20:32.733 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.733 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.733 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.990 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.990 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.990 13:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.990 13:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.990 13:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.990 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.990 { 00:20:32.990 "cntlid": 15, 00:20:32.990 "qid": 0, 00:20:32.990 "state": "enabled", 00:20:32.990 "thread": "nvmf_tgt_poll_group_000", 00:20:32.990 "listen_address": { 00:20:32.990 "trtype": "TCP", 00:20:32.990 "adrfam": "IPv4", 00:20:32.990 "traddr": "10.0.0.2", 00:20:32.990 "trsvcid": "4420" 00:20:32.990 }, 00:20:32.990 "peer_address": { 00:20:32.990 "trtype": "TCP", 00:20:32.990 "adrfam": "IPv4", 00:20:32.990 "traddr": "10.0.0.1", 00:20:32.990 "trsvcid": "35750" 00:20:32.990 }, 00:20:32.990 "auth": { 00:20:32.991 "state": "completed", 00:20:32.991 "digest": "sha256", 00:20:32.991 "dhgroup": "ffdhe2048" 00:20:32.991 } 00:20:32.991 } 00:20:32.991 ]' 00:20:32.991 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.991 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.991 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.991 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:32.991 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.248 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.248 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.248 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.506 13:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:20:34.471 13:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.471 13:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.471 13:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.471 13:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.471 13:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.471 13:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.471 13:32:08 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.471 13:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:34.471 13:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:34.471 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:34.471 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.471 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:34.471 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:34.471 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:34.471 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.471 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.471 13:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.471 13:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.471 13:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.471 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.471 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.033 00:20:35.033 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.033 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.034 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.290 { 00:20:35.290 "cntlid": 17, 00:20:35.290 "qid": 0, 00:20:35.290 "state": "enabled", 00:20:35.290 "thread": "nvmf_tgt_poll_group_000", 00:20:35.290 "listen_address": { 00:20:35.290 "trtype": "TCP", 00:20:35.290 "adrfam": "IPv4", 00:20:35.290 "traddr": 
"10.0.0.2", 00:20:35.290 "trsvcid": "4420" 00:20:35.290 }, 00:20:35.290 "peer_address": { 00:20:35.290 "trtype": "TCP", 00:20:35.290 "adrfam": "IPv4", 00:20:35.290 "traddr": "10.0.0.1", 00:20:35.290 "trsvcid": "59692" 00:20:35.290 }, 00:20:35.290 "auth": { 00:20:35.290 "state": "completed", 00:20:35.290 "digest": "sha256", 00:20:35.290 "dhgroup": "ffdhe3072" 00:20:35.290 } 00:20:35.290 } 00:20:35.290 ]' 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.290 13:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.547 13:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:20:36.476 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.476 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.476 13:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.476 13:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.476 13:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.476 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.476 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:36.476 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:36.734 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:36.734 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.734 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:36.734 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:36.734 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:36.734 13:32:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.734 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.734 13:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.734 13:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.734 13:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.734 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.734 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.993 00:20:36.993 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.993 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.993 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.251 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.251 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.251 13:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.251 13:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 13:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.251 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.251 { 00:20:37.251 "cntlid": 19, 00:20:37.251 "qid": 0, 00:20:37.251 "state": "enabled", 00:20:37.251 "thread": "nvmf_tgt_poll_group_000", 00:20:37.251 "listen_address": { 00:20:37.251 "trtype": "TCP", 00:20:37.251 "adrfam": "IPv4", 00:20:37.251 "traddr": "10.0.0.2", 00:20:37.251 "trsvcid": "4420" 00:20:37.251 }, 00:20:37.251 "peer_address": { 00:20:37.251 "trtype": "TCP", 00:20:37.251 "adrfam": "IPv4", 00:20:37.251 "traddr": "10.0.0.1", 00:20:37.251 "trsvcid": "59722" 00:20:37.251 }, 00:20:37.251 "auth": { 00:20:37.251 "state": "completed", 00:20:37.251 "digest": "sha256", 00:20:37.251 "dhgroup": "ffdhe3072" 00:20:37.251 } 00:20:37.251 } 00:20:37.251 ]' 00:20:37.251 13:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.509 13:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.509 13:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.509 13:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.509 13:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.509 13:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.509 13:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.509 13:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.767 13:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:20:38.703 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.703 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.703 13:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.703 13:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.703 13:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.703 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.703 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:38.703 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:38.961 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:38.961 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.961 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:38.961 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:38.961 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:38.961 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.961 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.961 13:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.961 13:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.961 13:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.961 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.961 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.219 00:20:39.219 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.219 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.219 13:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.476 13:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.476 13:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.476 13:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.476 13:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.476 13:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.734 13:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.734 { 00:20:39.734 "cntlid": 21, 00:20:39.734 "qid": 0, 00:20:39.734 "state": "enabled", 00:20:39.734 "thread": "nvmf_tgt_poll_group_000", 00:20:39.734 "listen_address": { 00:20:39.734 "trtype": "TCP", 00:20:39.734 "adrfam": "IPv4", 00:20:39.734 "traddr": "10.0.0.2", 00:20:39.734 "trsvcid": "4420" 00:20:39.734 }, 00:20:39.734 "peer_address": { 00:20:39.734 "trtype": "TCP", 00:20:39.734 "adrfam": "IPv4", 00:20:39.734 "traddr": "10.0.0.1", 00:20:39.734 "trsvcid": "59746" 00:20:39.734 }, 00:20:39.734 "auth": { 00:20:39.734 "state": "completed", 00:20:39.734 "digest": "sha256", 00:20:39.734 "dhgroup": "ffdhe3072" 00:20:39.734 } 00:20:39.734 } 00:20:39.734 ]' 00:20:39.734 13:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.734 13:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.734 13:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.734 13:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:39.734 13:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.734 13:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.734 13:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.734 13:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.992 13:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:20:40.924 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
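Each connect_authenticate <digest> <dhgroup> <keyid> round that repeats through this log has the same shape: restrict the host's allowed DH-HMAC-CHAP parameters, register the host NQN on the target subsystem with the key pair under test, attach a controller through the SPDK host stack, then redo the handshake with the kernel initiator via nvme connect, and tear everything back down. A condensed sketch of one round, using the addresses, NQNs and key names that appear in this log (the DHHC-1 strings are placeholders rather than real secrets, and rpc.py paths are abbreviated):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  # host side: only allow the digest/dhgroup combination under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # target side: allow this host with the key (and controller key) being exercised
  rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # authenticate once through the SPDK host stack ...
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # ... and once through the kernel initiator with the raw DHHC-1 secrets
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:02:<host secret>' --dhchap-ctrl-secret 'DHHC-1:01:<ctrl secret>'
  nvme disconnect -n "$SUBNQN"
  rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"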
00:20:40.924 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.924 13:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.924 13:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.924 13:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.924 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.924 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:40.924 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:41.181 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:41.181 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.181 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:41.181 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:41.181 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:41.181 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.181 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:41.181 13:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.181 13:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.181 13:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.181 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.181 13:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.745 00:20:41.745 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.745 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.745 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.003 { 00:20:42.003 "cntlid": 23, 00:20:42.003 "qid": 0, 00:20:42.003 "state": "enabled", 00:20:42.003 "thread": "nvmf_tgt_poll_group_000", 00:20:42.003 "listen_address": { 00:20:42.003 "trtype": "TCP", 00:20:42.003 "adrfam": "IPv4", 00:20:42.003 "traddr": "10.0.0.2", 00:20:42.003 "trsvcid": "4420" 00:20:42.003 }, 00:20:42.003 "peer_address": { 00:20:42.003 "trtype": "TCP", 00:20:42.003 "adrfam": "IPv4", 00:20:42.003 "traddr": "10.0.0.1", 00:20:42.003 "trsvcid": "59784" 00:20:42.003 }, 00:20:42.003 "auth": { 00:20:42.003 "state": "completed", 00:20:42.003 "digest": "sha256", 00:20:42.003 "dhgroup": "ffdhe3072" 00:20:42.003 } 00:20:42.003 } 00:20:42.003 ]' 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.003 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.260 13:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:20:43.194 13:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.194 13:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.194 13:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.194 13:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.194 13:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.194 13:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.194 13:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.194 13:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.194 13:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.452 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:20:43.452 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.452 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:43.452 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:43.452 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:43.452 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.452 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.452 13:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.452 13:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.452 13:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.452 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.452 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.710 00:20:43.967 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.967 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.967 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.225 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.225 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.225 13:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.225 13:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.225 13:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.225 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.225 { 00:20:44.225 "cntlid": 25, 00:20:44.225 "qid": 0, 00:20:44.225 "state": "enabled", 00:20:44.225 "thread": "nvmf_tgt_poll_group_000", 00:20:44.225 "listen_address": { 00:20:44.225 "trtype": "TCP", 00:20:44.225 "adrfam": "IPv4", 00:20:44.225 "traddr": "10.0.0.2", 00:20:44.225 "trsvcid": "4420" 00:20:44.225 }, 00:20:44.225 "peer_address": { 00:20:44.225 "trtype": "TCP", 00:20:44.225 "adrfam": "IPv4", 00:20:44.225 "traddr": "10.0.0.1", 00:20:44.225 "trsvcid": "56332" 00:20:44.225 }, 00:20:44.225 "auth": { 00:20:44.225 "state": "completed", 00:20:44.225 "digest": "sha256", 00:20:44.225 "dhgroup": "ffdhe4096" 00:20:44.225 } 00:20:44.225 } 00:20:44.225 ]' 00:20:44.225 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.225 13:32:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.225 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.225 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.225 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.225 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.225 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.225 13:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.482 13:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:20:45.415 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.415 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.415 13:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.415 13:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.415 13:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.415 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.415 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:45.415 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:45.673 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:45.673 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.673 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:45.673 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:45.673 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:45.673 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.673 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.673 13:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.673 13:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.673 13:32:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.673 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.673 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.239 00:20:46.239 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.239 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.239 13:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.497 { 00:20:46.497 "cntlid": 27, 00:20:46.497 "qid": 0, 00:20:46.497 "state": "enabled", 00:20:46.497 "thread": "nvmf_tgt_poll_group_000", 00:20:46.497 "listen_address": { 00:20:46.497 "trtype": "TCP", 00:20:46.497 "adrfam": "IPv4", 00:20:46.497 "traddr": "10.0.0.2", 00:20:46.497 "trsvcid": "4420" 00:20:46.497 }, 00:20:46.497 "peer_address": { 00:20:46.497 "trtype": "TCP", 00:20:46.497 "adrfam": "IPv4", 00:20:46.497 "traddr": "10.0.0.1", 00:20:46.497 "trsvcid": "56362" 00:20:46.497 }, 00:20:46.497 "auth": { 00:20:46.497 "state": "completed", 00:20:46.497 "digest": "sha256", 00:20:46.497 "dhgroup": "ffdhe4096" 00:20:46.497 } 00:20:46.497 } 00:20:46.497 ]' 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.497 13:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.755 13:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:20:47.717 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.717 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.717 13:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.717 13:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.717 13:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.717 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.717 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:47.717 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:48.294 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:48.295 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.295 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:48.295 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:48.295 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:48.295 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.295 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.295 13:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.295 13:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.295 13:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.295 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.295 13:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.560 00:20:48.560 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.560 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.560 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.817 { 00:20:48.817 "cntlid": 29, 00:20:48.817 "qid": 0, 00:20:48.817 "state": "enabled", 00:20:48.817 "thread": "nvmf_tgt_poll_group_000", 00:20:48.817 "listen_address": { 00:20:48.817 "trtype": "TCP", 00:20:48.817 "adrfam": "IPv4", 00:20:48.817 "traddr": "10.0.0.2", 00:20:48.817 "trsvcid": "4420" 00:20:48.817 }, 00:20:48.817 "peer_address": { 00:20:48.817 "trtype": "TCP", 00:20:48.817 "adrfam": "IPv4", 00:20:48.817 "traddr": "10.0.0.1", 00:20:48.817 "trsvcid": "56404" 00:20:48.817 }, 00:20:48.817 "auth": { 00:20:48.817 "state": "completed", 00:20:48.817 "digest": "sha256", 00:20:48.817 "dhgroup": "ffdhe4096" 00:20:48.817 } 00:20:48.817 } 00:20:48.817 ]' 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.817 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.073 13:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:20:50.004 13:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.261 13:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.261 13:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.261 13:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.261 13:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:20:50.261 13:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.261 13:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.261 13:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.519 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:50.519 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.519 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:50.519 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:50.519 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:50.519 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.519 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:50.519 13:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.519 13:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.519 13:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.519 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.519 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.777 00:20:50.777 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.777 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.778 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.035 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.035 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.035 13:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.036 13:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.036 13:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.036 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.036 { 00:20:51.036 "cntlid": 31, 00:20:51.036 "qid": 0, 00:20:51.036 "state": "enabled", 00:20:51.036 "thread": "nvmf_tgt_poll_group_000", 00:20:51.036 "listen_address": { 00:20:51.036 "trtype": "TCP", 00:20:51.036 "adrfam": "IPv4", 00:20:51.036 "traddr": "10.0.0.2", 00:20:51.036 "trsvcid": 
"4420" 00:20:51.036 }, 00:20:51.036 "peer_address": { 00:20:51.036 "trtype": "TCP", 00:20:51.036 "adrfam": "IPv4", 00:20:51.036 "traddr": "10.0.0.1", 00:20:51.036 "trsvcid": "56424" 00:20:51.036 }, 00:20:51.036 "auth": { 00:20:51.036 "state": "completed", 00:20:51.036 "digest": "sha256", 00:20:51.036 "dhgroup": "ffdhe4096" 00:20:51.036 } 00:20:51.036 } 00:20:51.036 ]' 00:20:51.036 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.036 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:51.036 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.293 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:51.293 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.293 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.293 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.293 13:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.552 13:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:20:52.485 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.485 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.485 13:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.485 13:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.485 13:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.485 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.485 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.485 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:52.485 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:52.743 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:52.743 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.743 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:52.743 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:52.743 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:52.743 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.743 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.743 13:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.743 13:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.743 13:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.743 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.743 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.308 00:20:53.308 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.308 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.308 13:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.566 { 00:20:53.566 "cntlid": 33, 00:20:53.566 "qid": 0, 00:20:53.566 "state": "enabled", 00:20:53.566 "thread": "nvmf_tgt_poll_group_000", 00:20:53.566 "listen_address": { 00:20:53.566 "trtype": "TCP", 00:20:53.566 "adrfam": "IPv4", 00:20:53.566 "traddr": "10.0.0.2", 00:20:53.566 "trsvcid": "4420" 00:20:53.566 }, 00:20:53.566 "peer_address": { 00:20:53.566 "trtype": "TCP", 00:20:53.566 "adrfam": "IPv4", 00:20:53.566 "traddr": "10.0.0.1", 00:20:53.566 "trsvcid": "56452" 00:20:53.566 }, 00:20:53.566 "auth": { 00:20:53.566 "state": "completed", 00:20:53.566 "digest": "sha256", 00:20:53.566 "dhgroup": "ffdhe6144" 00:20:53.566 } 00:20:53.566 } 00:20:53.566 ]' 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.566 13:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.824 13:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:20:54.754 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.754 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.754 13:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.754 13:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.754 13:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.754 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.754 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:54.754 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:55.011 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:55.011 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.011 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:55.011 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:55.011 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:55.011 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.011 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.011 13:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.011 13:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.011 13:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.011 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.011 13:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.575 00:20:55.575 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.575 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.575 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.833 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.833 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.833 13:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.833 13:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.833 13:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.833 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.833 { 00:20:55.833 "cntlid": 35, 00:20:55.833 "qid": 0, 00:20:55.833 "state": "enabled", 00:20:55.833 "thread": "nvmf_tgt_poll_group_000", 00:20:55.833 "listen_address": { 00:20:55.833 "trtype": "TCP", 00:20:55.833 "adrfam": "IPv4", 00:20:55.833 "traddr": "10.0.0.2", 00:20:55.833 "trsvcid": "4420" 00:20:55.833 }, 00:20:55.833 "peer_address": { 00:20:55.833 "trtype": "TCP", 00:20:55.833 "adrfam": "IPv4", 00:20:55.833 "traddr": "10.0.0.1", 00:20:55.833 "trsvcid": "54672" 00:20:55.833 }, 00:20:55.833 "auth": { 00:20:55.833 "state": "completed", 00:20:55.833 "digest": "sha256", 00:20:55.833 "dhgroup": "ffdhe6144" 00:20:55.833 } 00:20:55.833 } 00:20:55.833 ]' 00:20:55.833 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.833 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.833 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.833 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:55.833 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.091 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.091 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.091 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.347 13:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:20:57.279 13:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:57.279 13:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.279 13:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.279 13:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.279 13:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.279 13:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.279 13:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:57.279 13:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:57.537 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:57.537 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.537 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:57.537 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:57.537 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:57.537 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.537 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.537 13:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.537 13:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.537 13:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.537 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.537 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.102 00:20:58.102 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.102 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.102 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.358 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.358 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.358 13:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:58.358 13:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.358 13:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.358 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.358 { 00:20:58.358 "cntlid": 37, 00:20:58.358 "qid": 0, 00:20:58.358 "state": "enabled", 00:20:58.358 "thread": "nvmf_tgt_poll_group_000", 00:20:58.359 "listen_address": { 00:20:58.359 "trtype": "TCP", 00:20:58.359 "adrfam": "IPv4", 00:20:58.359 "traddr": "10.0.0.2", 00:20:58.359 "trsvcid": "4420" 00:20:58.359 }, 00:20:58.359 "peer_address": { 00:20:58.359 "trtype": "TCP", 00:20:58.359 "adrfam": "IPv4", 00:20:58.359 "traddr": "10.0.0.1", 00:20:58.359 "trsvcid": "54710" 00:20:58.359 }, 00:20:58.359 "auth": { 00:20:58.359 "state": "completed", 00:20:58.359 "digest": "sha256", 00:20:58.359 "dhgroup": "ffdhe6144" 00:20:58.359 } 00:20:58.359 } 00:20:58.359 ]' 00:20:58.359 13:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.359 13:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:58.359 13:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.359 13:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:58.359 13:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.359 13:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.359 13:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.359 13:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.616 13:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:20:59.545 13:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.802 13:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.802 13:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.802 13:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.802 13:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.802 13:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.802 13:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:59.802 13:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:00.060 13:32:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:21:00.060 13:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.060 13:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:00.060 13:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:00.060 13:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:00.060 13:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.060 13:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:00.060 13:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.060 13:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.060 13:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.060 13:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.060 13:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.627 00:21:00.627 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.627 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.627 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.921 { 00:21:00.921 "cntlid": 39, 00:21:00.921 "qid": 0, 00:21:00.921 "state": "enabled", 00:21:00.921 "thread": "nvmf_tgt_poll_group_000", 00:21:00.921 "listen_address": { 00:21:00.921 "trtype": "TCP", 00:21:00.921 "adrfam": "IPv4", 00:21:00.921 "traddr": "10.0.0.2", 00:21:00.921 "trsvcid": "4420" 00:21:00.921 }, 00:21:00.921 "peer_address": { 00:21:00.921 "trtype": "TCP", 00:21:00.921 "adrfam": "IPv4", 00:21:00.921 "traddr": "10.0.0.1", 00:21:00.921 "trsvcid": "54722" 00:21:00.921 }, 00:21:00.921 "auth": { 00:21:00.921 "state": "completed", 00:21:00.921 "digest": "sha256", 00:21:00.921 "dhgroup": "ffdhe6144" 00:21:00.921 } 00:21:00.921 } 00:21:00.921 ]' 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.921 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.179 13:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:21:02.110 13:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.368 13:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.368 13:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.368 13:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.368 13:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.368 13:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.368 13:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.368 13:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:02.368 13:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:02.626 13:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:21:02.626 13:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.626 13:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:02.626 13:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:02.626 13:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:02.626 13:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.626 13:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.626 13:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.626 13:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.626 13:32:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.626 13:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.626 13:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.559 00:21:03.559 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.559 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.559 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.559 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.559 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.559 13:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.559 13:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.817 13:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.817 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.817 { 00:21:03.817 "cntlid": 41, 00:21:03.817 "qid": 0, 00:21:03.817 "state": "enabled", 00:21:03.817 "thread": "nvmf_tgt_poll_group_000", 00:21:03.817 "listen_address": { 00:21:03.817 "trtype": "TCP", 00:21:03.817 "adrfam": "IPv4", 00:21:03.817 "traddr": "10.0.0.2", 00:21:03.817 "trsvcid": "4420" 00:21:03.817 }, 00:21:03.817 "peer_address": { 00:21:03.817 "trtype": "TCP", 00:21:03.817 "adrfam": "IPv4", 00:21:03.817 "traddr": "10.0.0.1", 00:21:03.817 "trsvcid": "54748" 00:21:03.817 }, 00:21:03.817 "auth": { 00:21:03.817 "state": "completed", 00:21:03.817 "digest": "sha256", 00:21:03.817 "dhgroup": "ffdhe8192" 00:21:03.817 } 00:21:03.817 } 00:21:03.817 ]' 00:21:03.817 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.817 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:03.817 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.817 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:03.817 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.817 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.817 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.817 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.075 13:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:21:05.007 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.007 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.007 13:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.007 13:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.007 13:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.007 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.007 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:05.007 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:05.266 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:21:05.266 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.266 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:05.266 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:05.266 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:05.266 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.266 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.266 13:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.266 13:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.266 13:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.266 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.266 13:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.199 00:21:06.199 13:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.199 13:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.199 13:32:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.457 13:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.457 13:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.457 13:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.457 13:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.457 13:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.457 13:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.457 { 00:21:06.457 "cntlid": 43, 00:21:06.457 "qid": 0, 00:21:06.457 "state": "enabled", 00:21:06.457 "thread": "nvmf_tgt_poll_group_000", 00:21:06.457 "listen_address": { 00:21:06.457 "trtype": "TCP", 00:21:06.457 "adrfam": "IPv4", 00:21:06.457 "traddr": "10.0.0.2", 00:21:06.457 "trsvcid": "4420" 00:21:06.457 }, 00:21:06.457 "peer_address": { 00:21:06.457 "trtype": "TCP", 00:21:06.457 "adrfam": "IPv4", 00:21:06.457 "traddr": "10.0.0.1", 00:21:06.457 "trsvcid": "51136" 00:21:06.457 }, 00:21:06.457 "auth": { 00:21:06.457 "state": "completed", 00:21:06.457 "digest": "sha256", 00:21:06.457 "dhgroup": "ffdhe8192" 00:21:06.457 } 00:21:06.457 } 00:21:06.457 ]' 00:21:06.457 13:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.457 13:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:06.457 13:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.457 13:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:06.457 13:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.457 13:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.457 13:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.457 13:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.715 13:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:21:07.647 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.647 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.647 13:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.647 13:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.647 13:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.647 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:21:07.647 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:07.647 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:07.904 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:21:07.904 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.904 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:07.904 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:07.904 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:07.904 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.904 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.904 13:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.904 13:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.904 13:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.904 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.904 13:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.835 00:21:08.835 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.835 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.835 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.092 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.092 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.092 13:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.092 13:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.092 13:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.092 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.092 { 00:21:09.092 "cntlid": 45, 00:21:09.092 "qid": 0, 00:21:09.092 "state": "enabled", 00:21:09.092 "thread": "nvmf_tgt_poll_group_000", 00:21:09.092 "listen_address": { 00:21:09.092 "trtype": "TCP", 00:21:09.092 "adrfam": "IPv4", 00:21:09.092 "traddr": "10.0.0.2", 00:21:09.092 
"trsvcid": "4420" 00:21:09.092 }, 00:21:09.093 "peer_address": { 00:21:09.093 "trtype": "TCP", 00:21:09.093 "adrfam": "IPv4", 00:21:09.093 "traddr": "10.0.0.1", 00:21:09.093 "trsvcid": "51178" 00:21:09.093 }, 00:21:09.093 "auth": { 00:21:09.093 "state": "completed", 00:21:09.093 "digest": "sha256", 00:21:09.093 "dhgroup": "ffdhe8192" 00:21:09.093 } 00:21:09.093 } 00:21:09.093 ]' 00:21:09.093 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.093 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:09.093 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.350 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:09.350 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.350 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.350 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.350 13:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.609 13:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:21:10.543 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.543 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.543 13:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.543 13:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.543 13:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.543 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.543 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:10.543 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:10.801 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:21:10.801 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.801 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:10.801 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:10.801 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:10.801 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:21:10.801 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:10.801 13:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.801 13:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.801 13:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.801 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.801 13:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.735 00:21:11.735 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.735 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.735 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.993 { 00:21:11.993 "cntlid": 47, 00:21:11.993 "qid": 0, 00:21:11.993 "state": "enabled", 00:21:11.993 "thread": "nvmf_tgt_poll_group_000", 00:21:11.993 "listen_address": { 00:21:11.993 "trtype": "TCP", 00:21:11.993 "adrfam": "IPv4", 00:21:11.993 "traddr": "10.0.0.2", 00:21:11.993 "trsvcid": "4420" 00:21:11.993 }, 00:21:11.993 "peer_address": { 00:21:11.993 "trtype": "TCP", 00:21:11.993 "adrfam": "IPv4", 00:21:11.993 "traddr": "10.0.0.1", 00:21:11.993 "trsvcid": "51208" 00:21:11.993 }, 00:21:11.993 "auth": { 00:21:11.993 "state": "completed", 00:21:11.993 "digest": "sha256", 00:21:11.993 "dhgroup": "ffdhe8192" 00:21:11.993 } 00:21:11.993 } 00:21:11.993 ]' 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:21:11.993 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.251 13:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:21:13.182 13:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.439 13:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.439 13:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.439 13:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.439 13:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.439 13:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:13.439 13:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.439 13:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.439 13:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:13.439 13:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:13.697 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:21:13.697 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.697 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:13.697 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:13.697 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:13.697 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.697 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.697 13:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.697 13:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.697 13:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.697 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.697 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.955 00:21:13.955 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.955 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.955 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.213 { 00:21:14.213 "cntlid": 49, 00:21:14.213 "qid": 0, 00:21:14.213 "state": "enabled", 00:21:14.213 "thread": "nvmf_tgt_poll_group_000", 00:21:14.213 "listen_address": { 00:21:14.213 "trtype": "TCP", 00:21:14.213 "adrfam": "IPv4", 00:21:14.213 "traddr": "10.0.0.2", 00:21:14.213 "trsvcid": "4420" 00:21:14.213 }, 00:21:14.213 "peer_address": { 00:21:14.213 "trtype": "TCP", 00:21:14.213 "adrfam": "IPv4", 00:21:14.213 "traddr": "10.0.0.1", 00:21:14.213 "trsvcid": "55304" 00:21:14.213 }, 00:21:14.213 "auth": { 00:21:14.213 "state": "completed", 00:21:14.213 "digest": "sha384", 00:21:14.213 "dhgroup": "null" 00:21:14.213 } 00:21:14.213 } 00:21:14.213 ]' 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.213 13:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.472 13:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:21:15.453 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.453 13:32:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.453 13:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.453 13:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.453 13:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.453 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.453 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:15.453 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:15.711 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:21:15.711 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.711 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:15.711 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:15.711 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:15.711 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.711 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.711 13:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.711 13:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.711 13:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.711 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.711 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.969 00:21:15.969 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.969 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.969 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.226 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.226 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.226 13:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.226 13:32:50 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:21:16.227 13:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.227 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.227 { 00:21:16.227 "cntlid": 51, 00:21:16.227 "qid": 0, 00:21:16.227 "state": "enabled", 00:21:16.227 "thread": "nvmf_tgt_poll_group_000", 00:21:16.227 "listen_address": { 00:21:16.227 "trtype": "TCP", 00:21:16.227 "adrfam": "IPv4", 00:21:16.227 "traddr": "10.0.0.2", 00:21:16.227 "trsvcid": "4420" 00:21:16.227 }, 00:21:16.227 "peer_address": { 00:21:16.227 "trtype": "TCP", 00:21:16.227 "adrfam": "IPv4", 00:21:16.227 "traddr": "10.0.0.1", 00:21:16.227 "trsvcid": "55334" 00:21:16.227 }, 00:21:16.227 "auth": { 00:21:16.227 "state": "completed", 00:21:16.227 "digest": "sha384", 00:21:16.227 "dhgroup": "null" 00:21:16.227 } 00:21:16.227 } 00:21:16.227 ]' 00:21:16.227 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.227 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.227 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.484 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:16.484 13:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.484 13:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.484 13:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.484 13:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.742 13:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:21:17.674 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.674 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.674 13:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.674 13:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.674 13:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.674 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.674 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:17.674 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:17.932 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:21:17.932 
13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.932 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:17.932 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:17.932 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:17.932 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.932 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.932 13:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.932 13:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.932 13:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.932 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.932 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.190 00:21:18.190 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.190 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.190 13:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.447 13:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.447 13:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.447 13:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.447 13:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.447 13:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.447 13:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.447 { 00:21:18.447 "cntlid": 53, 00:21:18.447 "qid": 0, 00:21:18.447 "state": "enabled", 00:21:18.447 "thread": "nvmf_tgt_poll_group_000", 00:21:18.447 "listen_address": { 00:21:18.447 "trtype": "TCP", 00:21:18.447 "adrfam": "IPv4", 00:21:18.447 "traddr": "10.0.0.2", 00:21:18.447 "trsvcid": "4420" 00:21:18.447 }, 00:21:18.447 "peer_address": { 00:21:18.447 "trtype": "TCP", 00:21:18.447 "adrfam": "IPv4", 00:21:18.447 "traddr": "10.0.0.1", 00:21:18.447 "trsvcid": "55354" 00:21:18.447 }, 00:21:18.447 "auth": { 00:21:18.447 "state": "completed", 00:21:18.447 "digest": "sha384", 00:21:18.447 "dhgroup": "null" 00:21:18.447 } 00:21:18.447 } 00:21:18.447 ]' 00:21:18.447 13:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.705 13:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:21:18.705 13:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.705 13:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:18.705 13:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.705 13:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.705 13:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.705 13:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.962 13:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:21:19.891 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.891 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.891 13:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.891 13:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.891 13:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.891 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.891 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:19.891 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:20.148 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:21:20.148 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.148 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.148 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:20.148 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:20.148 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.148 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:20.148 13:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.148 13:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.148 13:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.148 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.148 13:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.712 00:21:20.712 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.712 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.712 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.712 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.712 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.712 13:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.712 13:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.712 13:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.712 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.712 { 00:21:20.712 "cntlid": 55, 00:21:20.712 "qid": 0, 00:21:20.712 "state": "enabled", 00:21:20.712 "thread": "nvmf_tgt_poll_group_000", 00:21:20.712 "listen_address": { 00:21:20.712 "trtype": "TCP", 00:21:20.712 "adrfam": "IPv4", 00:21:20.712 "traddr": "10.0.0.2", 00:21:20.712 "trsvcid": "4420" 00:21:20.712 }, 00:21:20.712 "peer_address": { 00:21:20.712 "trtype": "TCP", 00:21:20.712 "adrfam": "IPv4", 00:21:20.712 "traddr": "10.0.0.1", 00:21:20.712 "trsvcid": "55386" 00:21:20.712 }, 00:21:20.712 "auth": { 00:21:20.712 "state": "completed", 00:21:20.712 "digest": "sha384", 00:21:20.712 "dhgroup": "null" 00:21:20.712 } 00:21:20.712 } 00:21:20.712 ]' 00:21:20.712 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.992 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.992 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.992 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:20.992 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.992 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.992 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.992 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.249 13:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:21:22.182 13:32:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.182 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.182 13:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.182 13:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.182 13:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.182 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.182 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.182 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:22.182 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:22.439 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:21:22.439 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.439 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:22.439 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:22.439 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:22.439 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.440 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.440 13:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.440 13:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.440 13:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.440 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.440 13:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.698 00:21:22.698 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.698 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.698 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.955 13:32:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.955 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.955 13:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.955 13:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.955 13:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.955 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.955 { 00:21:22.955 "cntlid": 57, 00:21:22.955 "qid": 0, 00:21:22.955 "state": "enabled", 00:21:22.955 "thread": "nvmf_tgt_poll_group_000", 00:21:22.955 "listen_address": { 00:21:22.955 "trtype": "TCP", 00:21:22.955 "adrfam": "IPv4", 00:21:22.955 "traddr": "10.0.0.2", 00:21:22.955 "trsvcid": "4420" 00:21:22.955 }, 00:21:22.955 "peer_address": { 00:21:22.955 "trtype": "TCP", 00:21:22.955 "adrfam": "IPv4", 00:21:22.955 "traddr": "10.0.0.1", 00:21:22.955 "trsvcid": "55414" 00:21:22.955 }, 00:21:22.955 "auth": { 00:21:22.955 "state": "completed", 00:21:22.955 "digest": "sha384", 00:21:22.955 "dhgroup": "ffdhe2048" 00:21:22.955 } 00:21:22.955 } 00:21:22.955 ]' 00:21:22.955 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.955 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.955 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.955 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:22.955 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.213 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.213 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.213 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.470 13:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:21:24.403 13:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.404 13:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.404 13:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.404 13:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.404 13:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.404 13:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.404 13:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:24.404 13:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:24.662 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:21:24.662 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.662 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:24.662 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:24.662 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:24.662 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.662 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.662 13:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.662 13:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.662 13:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.662 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.662 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.920 00:21:24.920 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.920 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.920 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.178 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.178 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.178 13:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.178 13:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.178 13:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.178 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.178 { 00:21:25.178 "cntlid": 59, 00:21:25.178 "qid": 0, 00:21:25.178 "state": "enabled", 00:21:25.178 "thread": "nvmf_tgt_poll_group_000", 00:21:25.178 "listen_address": { 00:21:25.178 "trtype": "TCP", 00:21:25.178 "adrfam": "IPv4", 00:21:25.178 "traddr": "10.0.0.2", 00:21:25.178 "trsvcid": "4420" 00:21:25.178 }, 00:21:25.178 "peer_address": { 00:21:25.178 "trtype": "TCP", 00:21:25.178 "adrfam": "IPv4", 00:21:25.178 
"traddr": "10.0.0.1", 00:21:25.178 "trsvcid": "50240" 00:21:25.178 }, 00:21:25.178 "auth": { 00:21:25.178 "state": "completed", 00:21:25.178 "digest": "sha384", 00:21:25.178 "dhgroup": "ffdhe2048" 00:21:25.178 } 00:21:25.178 } 00:21:25.178 ]' 00:21:25.178 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.179 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.179 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.179 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:25.179 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.179 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.179 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.179 13:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.437 13:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:21:26.370 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.370 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.370 13:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.370 13:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.370 13:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.370 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.370 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:26.370 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:26.628 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:21:26.628 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.628 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:26.628 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:26.628 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:26.628 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.628 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.628 13:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.628 13:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.886 13:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.886 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.886 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.144 00:21:27.144 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.144 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.144 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.402 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.402 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.402 13:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.402 13:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.402 13:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.402 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.402 { 00:21:27.402 "cntlid": 61, 00:21:27.402 "qid": 0, 00:21:27.402 "state": "enabled", 00:21:27.402 "thread": "nvmf_tgt_poll_group_000", 00:21:27.402 "listen_address": { 00:21:27.402 "trtype": "TCP", 00:21:27.402 "adrfam": "IPv4", 00:21:27.402 "traddr": "10.0.0.2", 00:21:27.402 "trsvcid": "4420" 00:21:27.402 }, 00:21:27.402 "peer_address": { 00:21:27.402 "trtype": "TCP", 00:21:27.402 "adrfam": "IPv4", 00:21:27.402 "traddr": "10.0.0.1", 00:21:27.402 "trsvcid": "50262" 00:21:27.402 }, 00:21:27.402 "auth": { 00:21:27.402 "state": "completed", 00:21:27.402 "digest": "sha384", 00:21:27.402 "dhgroup": "ffdhe2048" 00:21:27.402 } 00:21:27.402 } 00:21:27.402 ]' 00:21:27.402 13:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.402 13:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.402 13:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.402 13:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:27.402 13:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.402 13:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.402 13:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.402 13:33:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.661 13:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:21:28.631 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.631 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.631 13:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.631 13:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.631 13:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.632 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.632 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:28.632 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:28.896 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:21:28.896 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.896 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.896 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:28.896 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:28.896 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.896 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:28.896 13:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.896 13:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.896 13:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.896 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:28.896 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.460 00:21:29.460 13:33:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.460 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.460 13:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.460 13:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.460 13:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.460 13:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.460 13:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.460 13:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.460 13:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.460 { 00:21:29.460 "cntlid": 63, 00:21:29.460 "qid": 0, 00:21:29.460 "state": "enabled", 00:21:29.460 "thread": "nvmf_tgt_poll_group_000", 00:21:29.460 "listen_address": { 00:21:29.460 "trtype": "TCP", 00:21:29.460 "adrfam": "IPv4", 00:21:29.460 "traddr": "10.0.0.2", 00:21:29.460 "trsvcid": "4420" 00:21:29.460 }, 00:21:29.460 "peer_address": { 00:21:29.460 "trtype": "TCP", 00:21:29.460 "adrfam": "IPv4", 00:21:29.460 "traddr": "10.0.0.1", 00:21:29.460 "trsvcid": "50286" 00:21:29.460 }, 00:21:29.460 "auth": { 00:21:29.460 "state": "completed", 00:21:29.460 "digest": "sha384", 00:21:29.460 "dhgroup": "ffdhe2048" 00:21:29.460 } 00:21:29.460 } 00:21:29.460 ]' 00:21:29.460 13:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.717 13:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.717 13:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.717 13:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:29.717 13:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.717 13:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.717 13:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.717 13:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.974 13:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:21:30.906 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.906 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.906 13:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.906 13:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
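For readability, the single DH-HMAC-CHAP round that this trace keeps repeating (for each digest, dhgroup and key index) can be condensed into the bash sketch below. It only restates commands already visible in the log: the subsystem NQN, host NQN/UUID, address 10.0.0.2:4420, key names and the host-side RPC socket /var/tmp/host.sock are taken from the trace; the full rpc.py workspace path is shortened, the target-side calls are assumed to use the default RPC socket (the trace issues them through the rpc_cmd helper), and the DHHC-1 secret is a placeholder, not a usable key.

    #!/usr/bin/env bash
    # Sketch of one authentication round as exercised by target/auth.sh in this run.
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    RPC="scripts/rpc.py"                              # target-side RPC, default socket (assumed)
    HOSTRPC="scripts/rpc.py -s /var/tmp/host.sock"    # host-side bdev_nvme RPC, as in the trace

    # Restrict the host to one digest/dhgroup combination for this iteration.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Register the host on the subsystem with the keys under test, then attach and verify.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $HOSTRPC bdev_nvme_get_controllers                # name checked against nvme0 with jq
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN"          # auth.state/digest/dhgroup checked with jq
    $HOSTRPC bdev_nvme_detach_controller nvme0

    # Repeat the handshake through the kernel initiator, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret "DHHC-1:00:<base64-key>:"     # placeholder secret
    nvme disconnect -n "$SUBNQN"
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"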
00:21:30.906 13:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.906 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.906 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.906 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:30.906 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:31.164 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:21:31.164 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.164 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:31.164 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:31.164 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:31.164 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.164 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.164 13:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.164 13:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.164 13:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.164 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.164 13:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.422 00:21:31.422 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.422 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.422 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.680 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.680 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.680 13:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.680 13:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.680 13:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.680 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.680 { 
00:21:31.680 "cntlid": 65, 00:21:31.680 "qid": 0, 00:21:31.680 "state": "enabled", 00:21:31.680 "thread": "nvmf_tgt_poll_group_000", 00:21:31.680 "listen_address": { 00:21:31.680 "trtype": "TCP", 00:21:31.680 "adrfam": "IPv4", 00:21:31.680 "traddr": "10.0.0.2", 00:21:31.680 "trsvcid": "4420" 00:21:31.680 }, 00:21:31.680 "peer_address": { 00:21:31.680 "trtype": "TCP", 00:21:31.680 "adrfam": "IPv4", 00:21:31.680 "traddr": "10.0.0.1", 00:21:31.680 "trsvcid": "50316" 00:21:31.680 }, 00:21:31.680 "auth": { 00:21:31.680 "state": "completed", 00:21:31.680 "digest": "sha384", 00:21:31.680 "dhgroup": "ffdhe3072" 00:21:31.680 } 00:21:31.680 } 00:21:31.680 ]' 00:21:31.680 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.938 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.938 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.938 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:31.938 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.938 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.938 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.938 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.197 13:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:21:33.128 13:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.128 13:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.128 13:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.128 13:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.128 13:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.128 13:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.128 13:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:33.128 13:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:33.386 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:33.386 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.386 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:21:33.386 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:33.386 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:33.386 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.386 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.386 13:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.386 13:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.386 13:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.386 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.386 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.950 00:21:33.950 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.950 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.950 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.950 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.950 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.950 13:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.950 13:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.950 13:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.951 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.951 { 00:21:33.951 "cntlid": 67, 00:21:33.951 "qid": 0, 00:21:33.951 "state": "enabled", 00:21:33.951 "thread": "nvmf_tgt_poll_group_000", 00:21:33.951 "listen_address": { 00:21:33.951 "trtype": "TCP", 00:21:33.951 "adrfam": "IPv4", 00:21:33.951 "traddr": "10.0.0.2", 00:21:33.951 "trsvcid": "4420" 00:21:33.951 }, 00:21:33.951 "peer_address": { 00:21:33.951 "trtype": "TCP", 00:21:33.951 "adrfam": "IPv4", 00:21:33.951 "traddr": "10.0.0.1", 00:21:33.951 "trsvcid": "40092" 00:21:33.951 }, 00:21:33.951 "auth": { 00:21:33.951 "state": "completed", 00:21:33.951 "digest": "sha384", 00:21:33.951 "dhgroup": "ffdhe3072" 00:21:33.951 } 00:21:33.951 } 00:21:33.951 ]' 00:21:33.951 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.208 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.208 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.208 13:33:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:34.208 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.208 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.208 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.208 13:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.465 13:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:21:35.395 13:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.395 13:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.395 13:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.395 13:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.395 13:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.395 13:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.395 13:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:35.395 13:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:35.651 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:35.651 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.651 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:35.651 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:35.651 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:35.651 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.652 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.652 13:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.652 13:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.652 13:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.652 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.652 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.908 00:21:35.908 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.908 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.908 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.165 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.165 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.165 13:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.165 13:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.165 13:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.165 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.165 { 00:21:36.165 "cntlid": 69, 00:21:36.165 "qid": 0, 00:21:36.165 "state": "enabled", 00:21:36.165 "thread": "nvmf_tgt_poll_group_000", 00:21:36.165 "listen_address": { 00:21:36.166 "trtype": "TCP", 00:21:36.166 "adrfam": "IPv4", 00:21:36.166 "traddr": "10.0.0.2", 00:21:36.166 "trsvcid": "4420" 00:21:36.166 }, 00:21:36.166 "peer_address": { 00:21:36.166 "trtype": "TCP", 00:21:36.166 "adrfam": "IPv4", 00:21:36.166 "traddr": "10.0.0.1", 00:21:36.166 "trsvcid": "40124" 00:21:36.166 }, 00:21:36.166 "auth": { 00:21:36.166 "state": "completed", 00:21:36.166 "digest": "sha384", 00:21:36.166 "dhgroup": "ffdhe3072" 00:21:36.166 } 00:21:36.166 } 00:21:36.166 ]' 00:21:36.166 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.166 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.166 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.423 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:36.423 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.423 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.423 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.423 13:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.680 13:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret 
DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:21:37.613 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.613 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.613 13:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.613 13:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.613 13:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.613 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.613 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:37.613 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:37.871 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:37.871 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.871 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:37.871 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:37.871 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:37.871 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.871 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:37.871 13:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.871 13:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 13:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.871 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:37.871 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:38.129 00:21:38.129 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.129 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.129 13:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.386 13:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.386 13:33:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.386 13:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.386 13:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.386 13:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.387 13:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.387 { 00:21:38.387 "cntlid": 71, 00:21:38.387 "qid": 0, 00:21:38.387 "state": "enabled", 00:21:38.387 "thread": "nvmf_tgt_poll_group_000", 00:21:38.387 "listen_address": { 00:21:38.387 "trtype": "TCP", 00:21:38.387 "adrfam": "IPv4", 00:21:38.387 "traddr": "10.0.0.2", 00:21:38.387 "trsvcid": "4420" 00:21:38.387 }, 00:21:38.387 "peer_address": { 00:21:38.387 "trtype": "TCP", 00:21:38.387 "adrfam": "IPv4", 00:21:38.387 "traddr": "10.0.0.1", 00:21:38.387 "trsvcid": "40148" 00:21:38.387 }, 00:21:38.387 "auth": { 00:21:38.387 "state": "completed", 00:21:38.387 "digest": "sha384", 00:21:38.387 "dhgroup": "ffdhe3072" 00:21:38.387 } 00:21:38.387 } 00:21:38.387 ]' 00:21:38.387 13:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.387 13:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.387 13:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.387 13:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:38.387 13:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.644 13:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.644 13:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.644 13:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.644 13:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:21:39.579 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.579 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.579 13:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.579 13:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.579 13:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.579 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:39.579 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.579 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:39.579 13:33:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:39.841 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:39.841 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.841 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:39.841 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:39.841 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:39.841 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.841 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.841 13:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.841 13:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.841 13:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.841 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.841 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.406 00:21:40.406 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.406 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.406 13:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.664 { 00:21:40.664 "cntlid": 73, 00:21:40.664 "qid": 0, 00:21:40.664 "state": "enabled", 00:21:40.664 "thread": "nvmf_tgt_poll_group_000", 00:21:40.664 "listen_address": { 00:21:40.664 "trtype": "TCP", 00:21:40.664 "adrfam": "IPv4", 00:21:40.664 "traddr": "10.0.0.2", 00:21:40.664 "trsvcid": "4420" 00:21:40.664 }, 00:21:40.664 "peer_address": { 00:21:40.664 "trtype": "TCP", 00:21:40.664 "adrfam": "IPv4", 00:21:40.664 "traddr": "10.0.0.1", 00:21:40.664 "trsvcid": "40172" 00:21:40.664 }, 00:21:40.664 "auth": { 00:21:40.664 
"state": "completed", 00:21:40.664 "digest": "sha384", 00:21:40.664 "dhgroup": "ffdhe4096" 00:21:40.664 } 00:21:40.664 } 00:21:40.664 ]' 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.664 13:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.922 13:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:21:41.855 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.855 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.855 13:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.855 13:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.855 13:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.855 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.855 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:41.855 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:42.113 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:42.113 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.113 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:42.114 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:42.114 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:42.114 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.114 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.114 13:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.114 13:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.114 13:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.114 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.114 13:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.712 00:21:42.712 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.712 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.712 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.712 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.712 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.712 13:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.712 13:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.712 13:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.712 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.712 { 00:21:42.712 "cntlid": 75, 00:21:42.712 "qid": 0, 00:21:42.712 "state": "enabled", 00:21:42.712 "thread": "nvmf_tgt_poll_group_000", 00:21:42.712 "listen_address": { 00:21:42.712 "trtype": "TCP", 00:21:42.712 "adrfam": "IPv4", 00:21:42.712 "traddr": "10.0.0.2", 00:21:42.712 "trsvcid": "4420" 00:21:42.712 }, 00:21:42.712 "peer_address": { 00:21:42.712 "trtype": "TCP", 00:21:42.713 "adrfam": "IPv4", 00:21:42.713 "traddr": "10.0.0.1", 00:21:42.713 "trsvcid": "40202" 00:21:42.713 }, 00:21:42.713 "auth": { 00:21:42.713 "state": "completed", 00:21:42.713 "digest": "sha384", 00:21:42.713 "dhgroup": "ffdhe4096" 00:21:42.713 } 00:21:42.713 } 00:21:42.713 ]' 00:21:42.713 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.969 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.970 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.970 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:42.970 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.970 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.970 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.970 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.227 13:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:21:44.158 13:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.158 13:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.158 13:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.158 13:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.158 13:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.158 13:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.158 13:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:44.158 13:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:44.415 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:44.415 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.415 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:44.415 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:44.415 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:44.415 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.415 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.415 13:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.415 13:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.415 13:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.415 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.415 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:44.979 00:21:44.980 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.980 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.980 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.980 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.980 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.980 13:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.980 13:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.980 13:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.980 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.980 { 00:21:44.980 "cntlid": 77, 00:21:44.980 "qid": 0, 00:21:44.980 "state": "enabled", 00:21:44.980 "thread": "nvmf_tgt_poll_group_000", 00:21:44.980 "listen_address": { 00:21:44.980 "trtype": "TCP", 00:21:44.980 "adrfam": "IPv4", 00:21:44.980 "traddr": "10.0.0.2", 00:21:44.980 "trsvcid": "4420" 00:21:44.980 }, 00:21:44.980 "peer_address": { 00:21:44.980 "trtype": "TCP", 00:21:44.980 "adrfam": "IPv4", 00:21:44.980 "traddr": "10.0.0.1", 00:21:44.980 "trsvcid": "46528" 00:21:44.980 }, 00:21:44.980 "auth": { 00:21:44.980 "state": "completed", 00:21:44.980 "digest": "sha384", 00:21:44.980 "dhgroup": "ffdhe4096" 00:21:44.980 } 00:21:44.980 } 00:21:44.980 ]' 00:21:44.980 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.237 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:45.237 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.237 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:45.237 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.237 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.237 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.237 13:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.494 13:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:21:46.425 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.425 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.425 13:33:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.425 13:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.425 13:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.425 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.425 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:46.425 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:46.683 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:46.683 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.683 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:46.683 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:46.683 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:46.683 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.683 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:46.683 13:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.683 13:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.683 13:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.683 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.683 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.941 00:21:47.199 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.199 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.199 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.457 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.457 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.457 13:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.457 13:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.457 13:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.457 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.457 { 00:21:47.457 "cntlid": 79, 00:21:47.457 "qid": 
0, 00:21:47.457 "state": "enabled", 00:21:47.457 "thread": "nvmf_tgt_poll_group_000", 00:21:47.457 "listen_address": { 00:21:47.457 "trtype": "TCP", 00:21:47.457 "adrfam": "IPv4", 00:21:47.457 "traddr": "10.0.0.2", 00:21:47.457 "trsvcid": "4420" 00:21:47.457 }, 00:21:47.457 "peer_address": { 00:21:47.457 "trtype": "TCP", 00:21:47.457 "adrfam": "IPv4", 00:21:47.457 "traddr": "10.0.0.1", 00:21:47.457 "trsvcid": "46556" 00:21:47.457 }, 00:21:47.457 "auth": { 00:21:47.457 "state": "completed", 00:21:47.457 "digest": "sha384", 00:21:47.457 "dhgroup": "ffdhe4096" 00:21:47.457 } 00:21:47.457 } 00:21:47.457 ]' 00:21:47.457 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.457 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:47.457 13:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.457 13:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:47.457 13:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.457 13:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.457 13:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.457 13:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.714 13:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:21:48.648 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.648 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.648 13:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.648 13:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.648 13:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.648 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.648 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.648 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:48.648 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:48.905 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:48.905 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.905 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:48.905 13:33:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:48.905 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:48.905 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.905 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.905 13:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.905 13:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.905 13:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.905 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.905 13:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.470 00:21:49.470 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.470 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.470 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.727 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.727 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.727 13:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.727 13:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.727 13:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.727 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.727 { 00:21:49.727 "cntlid": 81, 00:21:49.727 "qid": 0, 00:21:49.727 "state": "enabled", 00:21:49.727 "thread": "nvmf_tgt_poll_group_000", 00:21:49.727 "listen_address": { 00:21:49.727 "trtype": "TCP", 00:21:49.727 "adrfam": "IPv4", 00:21:49.727 "traddr": "10.0.0.2", 00:21:49.727 "trsvcid": "4420" 00:21:49.727 }, 00:21:49.727 "peer_address": { 00:21:49.727 "trtype": "TCP", 00:21:49.727 "adrfam": "IPv4", 00:21:49.727 "traddr": "10.0.0.1", 00:21:49.727 "trsvcid": "46578" 00:21:49.727 }, 00:21:49.727 "auth": { 00:21:49.727 "state": "completed", 00:21:49.727 "digest": "sha384", 00:21:49.727 "dhgroup": "ffdhe6144" 00:21:49.727 } 00:21:49.728 } 00:21:49.728 ]' 00:21:49.728 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.728 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:49.728 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.728 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:49.728 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.728 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.728 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.728 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.985 13:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.362 13:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.928 00:21:51.928 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.928 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.928 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.186 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.186 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.186 13:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.186 13:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.186 13:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.186 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.186 { 00:21:52.186 "cntlid": 83, 00:21:52.186 "qid": 0, 00:21:52.186 "state": "enabled", 00:21:52.186 "thread": "nvmf_tgt_poll_group_000", 00:21:52.186 "listen_address": { 00:21:52.186 "trtype": "TCP", 00:21:52.186 "adrfam": "IPv4", 00:21:52.186 "traddr": "10.0.0.2", 00:21:52.186 "trsvcid": "4420" 00:21:52.186 }, 00:21:52.186 "peer_address": { 00:21:52.186 "trtype": "TCP", 00:21:52.186 "adrfam": "IPv4", 00:21:52.186 "traddr": "10.0.0.1", 00:21:52.186 "trsvcid": "46598" 00:21:52.186 }, 00:21:52.186 "auth": { 00:21:52.186 "state": "completed", 00:21:52.186 "digest": "sha384", 00:21:52.186 "dhgroup": "ffdhe6144" 00:21:52.186 } 00:21:52.186 } 00:21:52.186 ]' 00:21:52.186 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.186 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:52.186 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.186 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:52.186 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.186 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.187 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.187 13:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.445 13:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret 
DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:21:53.379 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.637 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.637 13:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.637 13:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.637 13:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.637 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.637 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:53.637 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:53.895 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:53.895 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.895 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:53.895 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:53.895 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:53.895 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.895 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.895 13:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.895 13:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.895 13:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.895 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.895 13:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.461 00:21:54.461 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.461 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.462 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.720 { 00:21:54.720 "cntlid": 85, 00:21:54.720 "qid": 0, 00:21:54.720 "state": "enabled", 00:21:54.720 "thread": "nvmf_tgt_poll_group_000", 00:21:54.720 "listen_address": { 00:21:54.720 "trtype": "TCP", 00:21:54.720 "adrfam": "IPv4", 00:21:54.720 "traddr": "10.0.0.2", 00:21:54.720 "trsvcid": "4420" 00:21:54.720 }, 00:21:54.720 "peer_address": { 00:21:54.720 "trtype": "TCP", 00:21:54.720 "adrfam": "IPv4", 00:21:54.720 "traddr": "10.0.0.1", 00:21:54.720 "trsvcid": "54830" 00:21:54.720 }, 00:21:54.720 "auth": { 00:21:54.720 "state": "completed", 00:21:54.720 "digest": "sha384", 00:21:54.720 "dhgroup": "ffdhe6144" 00:21:54.720 } 00:21:54.720 } 00:21:54.720 ]' 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.720 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.978 13:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:21:55.913 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.913 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.913 13:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.913 13:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.913 13:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.913 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.913 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
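
For reference, the per-pass commands are easier to follow when pulled out of the xtrace noise. Below is a minimal sketch of one connect_authenticate pass, reconstructed only from commands that appear in this trace; the key names (key2/ckey2) and the DHHC-1 secret strings are placeholders rather than the values the test actually generates and loads earlier in auth.sh.

    # One pass of the DH-HMAC-CHAP check, condensed from the trace above.
    # NQNs, addresses and RPC paths are copied from the log; secrets are placeholders.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    host_sock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # 1. Pin the host-side bdev_nvme module to the digest/dhgroup under test.
    "$rpc" -s "$host_sock" bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # 2. Authorize the host on the target subsystem with the chosen key pair
    #    (target-side call; issued through rpc_cmd in the harness).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3. Attach a controller from the host app and verify the negotiated
    #    auth parameters reported for the resulting qpair.
    "$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    "$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0

    # 4. Repeat the handshake with the kernel initiator (nvme-cli), then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret "DHHC-1:02:<host secret placeholder>" \
        --dhchap-ctrl-secret "DHHC-1:01:<controller secret placeholder>"
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each subsequent pass in the log repeats this sequence, varying only the --dhchap-dhgroups value (ffdhe3072 through ffdhe8192 for the sha384 digest) and the key index (key0 through key3, with key3 used without a controller key).
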
00:21:55.913 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:56.536 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:56.536 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.536 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:56.536 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:56.536 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:56.536 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.536 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:56.536 13:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.536 13:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.536 13:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.536 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.536 13:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.794 00:21:56.794 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.794 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.794 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.050 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.050 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.050 13:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.050 13:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.050 13:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.050 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.050 { 00:21:57.050 "cntlid": 87, 00:21:57.050 "qid": 0, 00:21:57.050 "state": "enabled", 00:21:57.050 "thread": "nvmf_tgt_poll_group_000", 00:21:57.050 "listen_address": { 00:21:57.050 "trtype": "TCP", 00:21:57.050 "adrfam": "IPv4", 00:21:57.050 "traddr": "10.0.0.2", 00:21:57.050 "trsvcid": "4420" 00:21:57.050 }, 00:21:57.050 "peer_address": { 00:21:57.050 "trtype": "TCP", 00:21:57.050 "adrfam": "IPv4", 00:21:57.050 "traddr": "10.0.0.1", 00:21:57.050 "trsvcid": "54860" 00:21:57.050 }, 00:21:57.051 "auth": { 00:21:57.051 "state": "completed", 
00:21:57.051 "digest": "sha384", 00:21:57.051 "dhgroup": "ffdhe6144" 00:21:57.051 } 00:21:57.051 } 00:21:57.051 ]' 00:21:57.051 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.308 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:57.308 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.308 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.308 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.308 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.308 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.308 13:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.565 13:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:21:58.496 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.496 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.496 13:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.496 13:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.496 13:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.496 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.496 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.496 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:58.496 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:58.753 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:58.753 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.753 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:58.753 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:58.753 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:58.753 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.753 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:58.753 13:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.753 13:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.753 13:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.753 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.753 13:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.684 00:21:59.684 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.684 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.684 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.942 { 00:21:59.942 "cntlid": 89, 00:21:59.942 "qid": 0, 00:21:59.942 "state": "enabled", 00:21:59.942 "thread": "nvmf_tgt_poll_group_000", 00:21:59.942 "listen_address": { 00:21:59.942 "trtype": "TCP", 00:21:59.942 "adrfam": "IPv4", 00:21:59.942 "traddr": "10.0.0.2", 00:21:59.942 "trsvcid": "4420" 00:21:59.942 }, 00:21:59.942 "peer_address": { 00:21:59.942 "trtype": "TCP", 00:21:59.942 "adrfam": "IPv4", 00:21:59.942 "traddr": "10.0.0.1", 00:21:59.942 "trsvcid": "54904" 00:21:59.942 }, 00:21:59.942 "auth": { 00:21:59.942 "state": "completed", 00:21:59.942 "digest": "sha384", 00:21:59.942 "dhgroup": "ffdhe8192" 00:21:59.942 } 00:21:59.942 } 00:21:59.942 ]' 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.942 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.200 13:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:22:01.573 13:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.573 13:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.573 13:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.573 13:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.573 13:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.573 13:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.573 13:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:01.573 13:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:01.573 13:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:22:01.573 13:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.573 13:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:01.573 13:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:01.573 13:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:01.573 13:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.573 13:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.573 13:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.573 13:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.573 13:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.573 13:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.573 13:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
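The iteration that ends above (sha384 / ffdhe8192 with key1) follows the same pattern as every other pass of this test. A condensed, standalone sketch of that pattern, assuming rpc_cmd in the log maps to the same rpc.py run against the target's default RPC socket, and that key1/ckey1 are DH-CHAP key names registered earlier in the test (not shown in this excerpt):

# One connect_authenticate pass, reconstructed from the commands logged above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBSYS=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Host side: restrict the SPDK initiator to a single digest/dhgroup combination.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# Target side: allow the host NQN on the subsystem with a DH-CHAP key and (optionally) a controller key.
$RPC nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller; DH-HMAC-CHAP is negotiated during this connect.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$HOSTNQN" -n "$SUBSYS" --dhchap-key key1 --dhchap-ctrlr-key ckey1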
00:22:02.507 00:22:02.507 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.507 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.507 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.765 { 00:22:02.765 "cntlid": 91, 00:22:02.765 "qid": 0, 00:22:02.765 "state": "enabled", 00:22:02.765 "thread": "nvmf_tgt_poll_group_000", 00:22:02.765 "listen_address": { 00:22:02.765 "trtype": "TCP", 00:22:02.765 "adrfam": "IPv4", 00:22:02.765 "traddr": "10.0.0.2", 00:22:02.765 "trsvcid": "4420" 00:22:02.765 }, 00:22:02.765 "peer_address": { 00:22:02.765 "trtype": "TCP", 00:22:02.765 "adrfam": "IPv4", 00:22:02.765 "traddr": "10.0.0.1", 00:22:02.765 "trsvcid": "54942" 00:22:02.765 }, 00:22:02.765 "auth": { 00:22:02.765 "state": "completed", 00:22:02.765 "digest": "sha384", 00:22:02.765 "dhgroup": "ffdhe8192" 00:22:02.765 } 00:22:02.765 } 00:22:02.765 ]' 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.765 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.023 13:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:22:04.396 13:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.396 13:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.396 13:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:04.396 13:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.396 13:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.396 13:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.396 13:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:04.396 13:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:04.396 13:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:22:04.396 13:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.396 13:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:04.396 13:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:04.396 13:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:04.397 13:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.397 13:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.397 13:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.397 13:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.397 13:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.397 13:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.397 13:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.329 00:22:05.329 13:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.329 13:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.329 13:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.587 { 
00:22:05.587 "cntlid": 93, 00:22:05.587 "qid": 0, 00:22:05.587 "state": "enabled", 00:22:05.587 "thread": "nvmf_tgt_poll_group_000", 00:22:05.587 "listen_address": { 00:22:05.587 "trtype": "TCP", 00:22:05.587 "adrfam": "IPv4", 00:22:05.587 "traddr": "10.0.0.2", 00:22:05.587 "trsvcid": "4420" 00:22:05.587 }, 00:22:05.587 "peer_address": { 00:22:05.587 "trtype": "TCP", 00:22:05.587 "adrfam": "IPv4", 00:22:05.587 "traddr": "10.0.0.1", 00:22:05.587 "trsvcid": "55806" 00:22:05.587 }, 00:22:05.587 "auth": { 00:22:05.587 "state": "completed", 00:22:05.587 "digest": "sha384", 00:22:05.587 "dhgroup": "ffdhe8192" 00:22:05.587 } 00:22:05.587 } 00:22:05.587 ]' 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.587 13:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.843 13:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:07.215 13:33:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.215 13:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.147 00:22:08.147 13:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.147 13:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.147 13:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.405 13:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.405 13:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.405 13:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.405 13:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.405 13:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.405 13:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.405 { 00:22:08.405 "cntlid": 95, 00:22:08.405 "qid": 0, 00:22:08.405 "state": "enabled", 00:22:08.405 "thread": "nvmf_tgt_poll_group_000", 00:22:08.405 "listen_address": { 00:22:08.405 "trtype": "TCP", 00:22:08.405 "adrfam": "IPv4", 00:22:08.405 "traddr": "10.0.0.2", 00:22:08.405 "trsvcid": "4420" 00:22:08.405 }, 00:22:08.405 "peer_address": { 00:22:08.405 "trtype": "TCP", 00:22:08.405 "adrfam": "IPv4", 00:22:08.405 "traddr": "10.0.0.1", 00:22:08.405 "trsvcid": "55832" 00:22:08.405 }, 00:22:08.405 "auth": { 00:22:08.405 "state": "completed", 00:22:08.405 "digest": "sha384", 00:22:08.405 "dhgroup": "ffdhe8192" 00:22:08.405 } 00:22:08.405 } 00:22:08.405 ]' 00:22:08.405 13:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.405 13:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:08.405 13:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.405 13:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.405 13:33:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.405 13:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.405 13:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.405 13:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.663 13:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:22:09.596 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.596 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.596 13:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.596 13:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.853 13:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.853 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:09.853 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:09.853 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.853 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:09.853 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:10.111 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:22:10.111 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.111 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.111 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:10.111 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:10.111 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.111 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.111 13:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.111 13:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.111 13:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.111 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.111 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.369 00:22:10.369 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.369 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.369 13:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.627 { 00:22:10.627 "cntlid": 97, 00:22:10.627 "qid": 0, 00:22:10.627 "state": "enabled", 00:22:10.627 "thread": "nvmf_tgt_poll_group_000", 00:22:10.627 "listen_address": { 00:22:10.627 "trtype": "TCP", 00:22:10.627 "adrfam": "IPv4", 00:22:10.627 "traddr": "10.0.0.2", 00:22:10.627 "trsvcid": "4420" 00:22:10.627 }, 00:22:10.627 "peer_address": { 00:22:10.627 "trtype": "TCP", 00:22:10.627 "adrfam": "IPv4", 00:22:10.627 "traddr": "10.0.0.1", 00:22:10.627 "trsvcid": "55850" 00:22:10.627 }, 00:22:10.627 "auth": { 00:22:10.627 "state": "completed", 00:22:10.627 "digest": "sha512", 00:22:10.627 "dhgroup": "null" 00:22:10.627 } 00:22:10.627 } 00:22:10.627 ]' 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.627 13:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.890 13:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret 
DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:22:11.897 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.897 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.897 13:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.897 13:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.897 13:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.897 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.897 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:11.897 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:12.155 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:22:12.155 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.155 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.155 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:12.155 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:12.155 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.155 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.155 13:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.155 13:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.155 13:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.155 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.155 13:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.412 00:22:12.412 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.412 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.412 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.670 13:33:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.670 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.670 13:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.670 13:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.670 13:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.670 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.670 { 00:22:12.670 "cntlid": 99, 00:22:12.670 "qid": 0, 00:22:12.670 "state": "enabled", 00:22:12.670 "thread": "nvmf_tgt_poll_group_000", 00:22:12.670 "listen_address": { 00:22:12.670 "trtype": "TCP", 00:22:12.670 "adrfam": "IPv4", 00:22:12.670 "traddr": "10.0.0.2", 00:22:12.670 "trsvcid": "4420" 00:22:12.670 }, 00:22:12.670 "peer_address": { 00:22:12.670 "trtype": "TCP", 00:22:12.670 "adrfam": "IPv4", 00:22:12.670 "traddr": "10.0.0.1", 00:22:12.670 "trsvcid": "55892" 00:22:12.670 }, 00:22:12.670 "auth": { 00:22:12.670 "state": "completed", 00:22:12.670 "digest": "sha512", 00:22:12.670 "dhgroup": "null" 00:22:12.670 } 00:22:12.670 } 00:22:12.670 ]' 00:22:12.670 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.670 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.670 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.670 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:12.670 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.928 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.928 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.928 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.186 13:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:22:14.117 13:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.117 13:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.117 13:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.117 13:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.117 13:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.117 13:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.117 13:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:14.117 13:33:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:14.374 13:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:22:14.374 13:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.374 13:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:14.374 13:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:14.374 13:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:14.375 13:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.375 13:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.375 13:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.375 13:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.375 13:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.375 13:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.375 13:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.631 00:22:14.631 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.631 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.631 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.889 { 00:22:14.889 "cntlid": 101, 00:22:14.889 "qid": 0, 00:22:14.889 "state": "enabled", 00:22:14.889 "thread": "nvmf_tgt_poll_group_000", 00:22:14.889 "listen_address": { 00:22:14.889 "trtype": "TCP", 00:22:14.889 "adrfam": "IPv4", 00:22:14.889 "traddr": "10.0.0.2", 00:22:14.889 "trsvcid": "4420" 00:22:14.889 }, 00:22:14.889 "peer_address": { 00:22:14.889 "trtype": "TCP", 00:22:14.889 "adrfam": "IPv4", 00:22:14.889 "traddr": "10.0.0.1", 00:22:14.889 "trsvcid": "40598" 00:22:14.889 }, 00:22:14.889 "auth": 
{ 00:22:14.889 "state": "completed", 00:22:14.889 "digest": "sha512", 00:22:14.889 "dhgroup": "null" 00:22:14.889 } 00:22:14.889 } 00:22:14.889 ]' 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.889 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.148 13:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:22:16.521 13:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.521 13:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.521 13:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.521 13:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.521 13:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.521 13:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.521 13:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:16.521 13:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:16.521 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:22:16.521 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.521 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.521 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:16.521 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:16.521 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.521 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:16.521 13:33:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.521 13:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.521 13:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.521 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.521 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.780 00:22:16.780 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.780 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.780 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.039 { 00:22:17.039 "cntlid": 103, 00:22:17.039 "qid": 0, 00:22:17.039 "state": "enabled", 00:22:17.039 "thread": "nvmf_tgt_poll_group_000", 00:22:17.039 "listen_address": { 00:22:17.039 "trtype": "TCP", 00:22:17.039 "adrfam": "IPv4", 00:22:17.039 "traddr": "10.0.0.2", 00:22:17.039 "trsvcid": "4420" 00:22:17.039 }, 00:22:17.039 "peer_address": { 00:22:17.039 "trtype": "TCP", 00:22:17.039 "adrfam": "IPv4", 00:22:17.039 "traddr": "10.0.0.1", 00:22:17.039 "trsvcid": "40622" 00:22:17.039 }, 00:22:17.039 "auth": { 00:22:17.039 "state": "completed", 00:22:17.039 "digest": "sha512", 00:22:17.039 "dhgroup": "null" 00:22:17.039 } 00:22:17.039 } 00:22:17.039 ]' 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.039 13:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.605 13:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:22:18.538 13:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.538 13:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.538 13:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.538 13:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.538 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.103 00:22:19.103 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.103 13:33:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.103 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.360 { 00:22:19.360 "cntlid": 105, 00:22:19.360 "qid": 0, 00:22:19.360 "state": "enabled", 00:22:19.360 "thread": "nvmf_tgt_poll_group_000", 00:22:19.360 "listen_address": { 00:22:19.360 "trtype": "TCP", 00:22:19.360 "adrfam": "IPv4", 00:22:19.360 "traddr": "10.0.0.2", 00:22:19.360 "trsvcid": "4420" 00:22:19.360 }, 00:22:19.360 "peer_address": { 00:22:19.360 "trtype": "TCP", 00:22:19.360 "adrfam": "IPv4", 00:22:19.360 "traddr": "10.0.0.1", 00:22:19.360 "trsvcid": "40646" 00:22:19.360 }, 00:22:19.360 "auth": { 00:22:19.360 "state": "completed", 00:22:19.360 "digest": "sha512", 00:22:19.360 "dhgroup": "ffdhe2048" 00:22:19.360 } 00:22:19.360 } 00:22:19.360 ]' 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.360 13:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.618 13:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:22:20.551 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.551 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.551 13:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.551 13:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
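After each attach, the test checks what was actually negotiated and then repeats the handshake with the kernel initiator, as in the sha512 / ffdhe2048 pass that ends above. A sketch of that verification and nvme-cli leg, under the same assumptions as before; the DHHC-1 secrets are abbreviated here, the log shows the full base64 values:

# Verify the negotiated auth parameters, then redo the handshake with nvme-cli.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBSYS=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# The SPDK initiator should have created exactly one controller named nvme0.
[[ $($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# The target reports the qpair's auth block; digest, dhgroup and state must match what was requested.
qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBSYS")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Drop the SPDK controller, then connect with the kernel initiator using the interchange-format secrets.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBSYS" -i 1 -q "$HOSTNQN" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
  --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n "$SUBSYS"
$RPC nvmf_subsystem_remove_host "$SUBSYS" "$HOSTNQN"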
00:22:20.551 13:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.551 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.551 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:20.551 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:20.809 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:22:20.809 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.809 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:20.809 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:20.809 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:20.809 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.809 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.809 13:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.809 13:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.809 13:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.809 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.809 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.067 00:22:21.067 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.067 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.067 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.324 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.324 13:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.324 13:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.324 13:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.324 13:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.324 13:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.324 { 00:22:21.324 "cntlid": 107, 00:22:21.324 "qid": 0, 00:22:21.324 "state": "enabled", 00:22:21.324 "thread": 
"nvmf_tgt_poll_group_000", 00:22:21.324 "listen_address": { 00:22:21.324 "trtype": "TCP", 00:22:21.324 "adrfam": "IPv4", 00:22:21.324 "traddr": "10.0.0.2", 00:22:21.324 "trsvcid": "4420" 00:22:21.324 }, 00:22:21.324 "peer_address": { 00:22:21.324 "trtype": "TCP", 00:22:21.324 "adrfam": "IPv4", 00:22:21.324 "traddr": "10.0.0.1", 00:22:21.324 "trsvcid": "40668" 00:22:21.324 }, 00:22:21.324 "auth": { 00:22:21.324 "state": "completed", 00:22:21.324 "digest": "sha512", 00:22:21.324 "dhgroup": "ffdhe2048" 00:22:21.324 } 00:22:21.324 } 00:22:21.324 ]' 00:22:21.324 13:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.324 13:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.324 13:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.581 13:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:21.581 13:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.581 13:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.581 13:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.581 13:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.839 13:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:22:22.772 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.772 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.772 13:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.772 13:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.772 13:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.772 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:22.772 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:22.772 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:23.031 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:22:23.031 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.031 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:23.031 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:23.031 13:33:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:23.031 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.031 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.031 13:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.031 13:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.031 13:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.031 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.031 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.289 00:22:23.289 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.289 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.289 13:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.547 13:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.547 13:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.547 13:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.547 13:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.547 13:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.547 13:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.547 { 00:22:23.547 "cntlid": 109, 00:22:23.547 "qid": 0, 00:22:23.547 "state": "enabled", 00:22:23.547 "thread": "nvmf_tgt_poll_group_000", 00:22:23.547 "listen_address": { 00:22:23.547 "trtype": "TCP", 00:22:23.547 "adrfam": "IPv4", 00:22:23.547 "traddr": "10.0.0.2", 00:22:23.547 "trsvcid": "4420" 00:22:23.547 }, 00:22:23.547 "peer_address": { 00:22:23.547 "trtype": "TCP", 00:22:23.547 "adrfam": "IPv4", 00:22:23.547 "traddr": "10.0.0.1", 00:22:23.547 "trsvcid": "36380" 00:22:23.547 }, 00:22:23.547 "auth": { 00:22:23.547 "state": "completed", 00:22:23.547 "digest": "sha512", 00:22:23.547 "dhgroup": "ffdhe2048" 00:22:23.547 } 00:22:23.547 } 00:22:23.547 ]' 00:22:23.547 13:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.547 13:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.547 13:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.547 13:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:23.547 13:33:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.805 13:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.806 13:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.806 13:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.806 13:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:22:25.208 13:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:25.209 13:33:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:25.467 00:22:25.467 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.467 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.467 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.725 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.725 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.725 13:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.725 13:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.982 13:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.982 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:25.982 { 00:22:25.982 "cntlid": 111, 00:22:25.982 "qid": 0, 00:22:25.982 "state": "enabled", 00:22:25.982 "thread": "nvmf_tgt_poll_group_000", 00:22:25.982 "listen_address": { 00:22:25.982 "trtype": "TCP", 00:22:25.982 "adrfam": "IPv4", 00:22:25.982 "traddr": "10.0.0.2", 00:22:25.983 "trsvcid": "4420" 00:22:25.983 }, 00:22:25.983 "peer_address": { 00:22:25.983 "trtype": "TCP", 00:22:25.983 "adrfam": "IPv4", 00:22:25.983 "traddr": "10.0.0.1", 00:22:25.983 "trsvcid": "36404" 00:22:25.983 }, 00:22:25.983 "auth": { 00:22:25.983 "state": "completed", 00:22:25.983 "digest": "sha512", 00:22:25.983 "dhgroup": "ffdhe2048" 00:22:25.983 } 00:22:25.983 } 00:22:25.983 ]' 00:22:25.983 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.983 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.983 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.983 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:25.983 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.983 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.983 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.983 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.241 13:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:22:27.173 13:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.173 13:34:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.173 13:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.173 13:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.173 13:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.173 13:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:27.173 13:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.173 13:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:27.173 13:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:27.737 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:22:27.737 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.737 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.737 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:27.737 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:27.738 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.738 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.738 13:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.738 13:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.738 13:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.738 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.738 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.995 00:22:27.995 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.995 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.995 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.265 { 00:22:28.265 "cntlid": 113, 00:22:28.265 "qid": 0, 00:22:28.265 "state": "enabled", 00:22:28.265 "thread": "nvmf_tgt_poll_group_000", 00:22:28.265 "listen_address": { 00:22:28.265 "trtype": "TCP", 00:22:28.265 "adrfam": "IPv4", 00:22:28.265 "traddr": "10.0.0.2", 00:22:28.265 "trsvcid": "4420" 00:22:28.265 }, 00:22:28.265 "peer_address": { 00:22:28.265 "trtype": "TCP", 00:22:28.265 "adrfam": "IPv4", 00:22:28.265 "traddr": "10.0.0.1", 00:22:28.265 "trsvcid": "36450" 00:22:28.265 }, 00:22:28.265 "auth": { 00:22:28.265 "state": "completed", 00:22:28.265 "digest": "sha512", 00:22:28.265 "dhgroup": "ffdhe3072" 00:22:28.265 } 00:22:28.265 } 00:22:28.265 ]' 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.265 13:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.525 13:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:22:29.455 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.455 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.455 13:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.455 13:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.455 13:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.455 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.455 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:29.455 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:29.713 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:22:29.713 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.713 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:29.713 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:29.713 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:29.713 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.713 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.713 13:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.713 13:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.713 13:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.713 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.713 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.278 00:22:30.278 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.278 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.278 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.278 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.278 13:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.278 13:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.278 13:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.278 13:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.278 13:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.278 { 00:22:30.278 "cntlid": 115, 00:22:30.278 "qid": 0, 00:22:30.278 "state": "enabled", 00:22:30.278 "thread": "nvmf_tgt_poll_group_000", 00:22:30.278 "listen_address": { 00:22:30.278 "trtype": "TCP", 00:22:30.278 "adrfam": "IPv4", 00:22:30.278 "traddr": "10.0.0.2", 00:22:30.278 "trsvcid": "4420" 00:22:30.278 }, 00:22:30.278 "peer_address": { 00:22:30.278 "trtype": "TCP", 00:22:30.278 "adrfam": "IPv4", 00:22:30.278 "traddr": "10.0.0.1", 00:22:30.278 "trsvcid": "36486" 00:22:30.278 }, 00:22:30.278 "auth": { 00:22:30.278 "state": "completed", 00:22:30.278 "digest": "sha512", 00:22:30.278 "dhgroup": "ffdhe3072" 00:22:30.278 } 00:22:30.278 } 
00:22:30.278 ]' 00:22:30.278 13:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.536 13:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.536 13:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.536 13:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:30.536 13:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.536 13:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.536 13:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.536 13:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.792 13:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:22:31.722 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.722 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.722 13:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.722 13:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.722 13:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.722 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:31.722 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:31.722 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:31.980 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:22:31.980 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:31.980 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:31.980 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:31.980 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:31.980 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.980 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.980 13:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.980 13:34:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.980 13:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.980 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.980 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.238 00:22:32.238 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:32.238 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.238 13:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:32.495 13:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.495 13:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.495 13:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.495 13:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.495 13:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.495 13:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:32.495 { 00:22:32.495 "cntlid": 117, 00:22:32.495 "qid": 0, 00:22:32.495 "state": "enabled", 00:22:32.495 "thread": "nvmf_tgt_poll_group_000", 00:22:32.495 "listen_address": { 00:22:32.495 "trtype": "TCP", 00:22:32.495 "adrfam": "IPv4", 00:22:32.495 "traddr": "10.0.0.2", 00:22:32.495 "trsvcid": "4420" 00:22:32.495 }, 00:22:32.495 "peer_address": { 00:22:32.495 "trtype": "TCP", 00:22:32.495 "adrfam": "IPv4", 00:22:32.495 "traddr": "10.0.0.1", 00:22:32.495 "trsvcid": "36518" 00:22:32.495 }, 00:22:32.495 "auth": { 00:22:32.495 "state": "completed", 00:22:32.495 "digest": "sha512", 00:22:32.495 "dhgroup": "ffdhe3072" 00:22:32.495 } 00:22:32.495 } 00:22:32.495 ]' 00:22:32.495 13:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:32.752 13:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.752 13:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:32.752 13:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:32.752 13:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:32.752 13:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.752 13:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.752 13:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.010 13:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:22:33.942 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.942 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.942 13:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.942 13:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.942 13:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.942 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:33.942 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:33.942 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:34.199 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:22:34.199 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:34.199 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:34.199 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:34.199 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:34.199 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.199 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:34.199 13:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.199 13:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.199 13:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.199 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.199 13:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.455 00:22:34.455 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:34.455 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:34.455 13:34:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.712 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.712 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.712 13:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.712 13:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.712 13:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.712 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:34.712 { 00:22:34.712 "cntlid": 119, 00:22:34.712 "qid": 0, 00:22:34.712 "state": "enabled", 00:22:34.712 "thread": "nvmf_tgt_poll_group_000", 00:22:34.712 "listen_address": { 00:22:34.712 "trtype": "TCP", 00:22:34.712 "adrfam": "IPv4", 00:22:34.712 "traddr": "10.0.0.2", 00:22:34.712 "trsvcid": "4420" 00:22:34.712 }, 00:22:34.712 "peer_address": { 00:22:34.712 "trtype": "TCP", 00:22:34.712 "adrfam": "IPv4", 00:22:34.712 "traddr": "10.0.0.1", 00:22:34.712 "trsvcid": "34162" 00:22:34.712 }, 00:22:34.712 "auth": { 00:22:34.712 "state": "completed", 00:22:34.712 "digest": "sha512", 00:22:34.712 "dhgroup": "ffdhe3072" 00:22:34.712 } 00:22:34.712 } 00:22:34.712 ]' 00:22:34.712 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:34.970 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:34.970 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:34.970 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:34.970 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:34.970 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.970 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.970 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.227 13:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:22:36.160 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.160 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.160 13:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.160 13:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.160 13:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.160 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:36.160 13:34:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:36.160 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:36.160 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:36.418 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:22:36.418 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:36.418 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:36.418 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:36.418 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:36.418 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.418 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.418 13:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.418 13:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.418 13:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.418 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.418 13:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.676 00:22:36.676 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:36.676 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.676 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.934 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.934 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.934 13:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.934 13:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.934 13:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.934 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:36.934 { 00:22:36.934 "cntlid": 121, 00:22:36.934 "qid": 0, 00:22:36.934 "state": "enabled", 00:22:36.934 "thread": "nvmf_tgt_poll_group_000", 00:22:36.934 "listen_address": { 00:22:36.934 "trtype": "TCP", 00:22:36.934 "adrfam": "IPv4", 
00:22:36.934 "traddr": "10.0.0.2", 00:22:36.934 "trsvcid": "4420" 00:22:36.934 }, 00:22:36.934 "peer_address": { 00:22:36.934 "trtype": "TCP", 00:22:36.934 "adrfam": "IPv4", 00:22:36.934 "traddr": "10.0.0.1", 00:22:36.934 "trsvcid": "34194" 00:22:36.934 }, 00:22:36.934 "auth": { 00:22:36.934 "state": "completed", 00:22:36.934 "digest": "sha512", 00:22:36.934 "dhgroup": "ffdhe4096" 00:22:36.934 } 00:22:36.934 } 00:22:36.934 ]' 00:22:36.934 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:37.191 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.191 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:37.191 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:37.191 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:37.191 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.191 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.191 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.449 13:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:22:38.379 13:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.379 13:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.380 13:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.380 13:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.380 13:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.380 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:38.380 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:38.380 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:38.636 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:22:38.636 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:38.636 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:38.636 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:38.636 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:38.636 13:34:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.636 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.636 13:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.636 13:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.636 13:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.636 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.636 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.932 00:22:38.932 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:38.932 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:38.932 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.208 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.209 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.209 13:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.209 13:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.209 13:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.209 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:39.209 { 00:22:39.209 "cntlid": 123, 00:22:39.209 "qid": 0, 00:22:39.209 "state": "enabled", 00:22:39.209 "thread": "nvmf_tgt_poll_group_000", 00:22:39.209 "listen_address": { 00:22:39.209 "trtype": "TCP", 00:22:39.209 "adrfam": "IPv4", 00:22:39.209 "traddr": "10.0.0.2", 00:22:39.209 "trsvcid": "4420" 00:22:39.209 }, 00:22:39.209 "peer_address": { 00:22:39.209 "trtype": "TCP", 00:22:39.209 "adrfam": "IPv4", 00:22:39.209 "traddr": "10.0.0.1", 00:22:39.209 "trsvcid": "34226" 00:22:39.209 }, 00:22:39.209 "auth": { 00:22:39.209 "state": "completed", 00:22:39.209 "digest": "sha512", 00:22:39.209 "dhgroup": "ffdhe4096" 00:22:39.209 } 00:22:39.209 } 00:22:39.209 ]' 00:22:39.209 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:39.466 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:39.466 13:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:39.466 13:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:39.466 13:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:39.466 13:34:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.466 13:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.466 13:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.723 13:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:22:40.655 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.655 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.655 13:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.655 13:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.655 13:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.655 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:40.655 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:40.655 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:40.912 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:40.912 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:40.912 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:40.912 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:40.912 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:40.912 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.912 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.912 13:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.912 13:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.912 13:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.912 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.912 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.169 00:22:41.427 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:41.427 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:41.427 13:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:41.685 { 00:22:41.685 "cntlid": 125, 00:22:41.685 "qid": 0, 00:22:41.685 "state": "enabled", 00:22:41.685 "thread": "nvmf_tgt_poll_group_000", 00:22:41.685 "listen_address": { 00:22:41.685 "trtype": "TCP", 00:22:41.685 "adrfam": "IPv4", 00:22:41.685 "traddr": "10.0.0.2", 00:22:41.685 "trsvcid": "4420" 00:22:41.685 }, 00:22:41.685 "peer_address": { 00:22:41.685 "trtype": "TCP", 00:22:41.685 "adrfam": "IPv4", 00:22:41.685 "traddr": "10.0.0.1", 00:22:41.685 "trsvcid": "34242" 00:22:41.685 }, 00:22:41.685 "auth": { 00:22:41.685 "state": "completed", 00:22:41.685 "digest": "sha512", 00:22:41.685 "dhgroup": "ffdhe4096" 00:22:41.685 } 00:22:41.685 } 00:22:41.685 ]' 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.685 13:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.943 13:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:22:42.877 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
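For reference, every iteration in this trace exercises the same connect_authenticate flow from target/auth.sh: the host-side bdev_nvme options are pinned to one digest/dhgroup pair, the host is added to the subsystem with the key under test, a controller is attached and its qpair is checked for auth state "completed", and the same handshake is then repeated through nvme-cli before the host is removed again. A condensed sketch of one such iteration, using only the RPCs and flags visible in the trace (key material elided; the DHHC-1 secrets below are placeholders, not the literal values used by this run), looks like this:

  # One connect_authenticate iteration (condensed from the trace above; the
  # DHHC-1 secrets and the choice of key2/ffdhe4096 are illustrative only).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Host side: restrict the SPDK initiator to a single digest/dhgroup pair.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Target side: allow the host on the subsystem with the key under test
  # (the controller key is optional; the key3 iterations above omit it).
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Authenticate through the SPDK host stack and verify the qpair state.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Repeat the handshake with the kernel initiator via nvme-cli, then clean up.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:02:<key2>' --dhchap-ctrl-secret 'DHHC-1:01:<ckey2>'
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Here rpc_cmd stands for the harness helper that issues RPCs to the nvmf target application, while host-side calls go through rpc.py -s /var/tmp/host.sock, matching the hostrpc wrapper seen in the trace.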
00:22:42.877 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.877 13:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.877 13:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.877 13:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.877 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:42.877 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.877 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.135 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:43.135 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:43.135 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:43.135 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:43.135 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:43.135 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.135 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:43.135 13:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.135 13:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.393 13:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.393 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:43.393 13:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:43.651 00:22:43.651 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:43.651 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:43.651 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.909 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.909 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.909 13:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.909 13:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:22:43.909 13:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.909 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.909 { 00:22:43.909 "cntlid": 127, 00:22:43.909 "qid": 0, 00:22:43.909 "state": "enabled", 00:22:43.909 "thread": "nvmf_tgt_poll_group_000", 00:22:43.909 "listen_address": { 00:22:43.909 "trtype": "TCP", 00:22:43.909 "adrfam": "IPv4", 00:22:43.909 "traddr": "10.0.0.2", 00:22:43.909 "trsvcid": "4420" 00:22:43.909 }, 00:22:43.909 "peer_address": { 00:22:43.909 "trtype": "TCP", 00:22:43.909 "adrfam": "IPv4", 00:22:43.909 "traddr": "10.0.0.1", 00:22:43.909 "trsvcid": "50250" 00:22:43.909 }, 00:22:43.909 "auth": { 00:22:43.909 "state": "completed", 00:22:43.909 "digest": "sha512", 00:22:43.909 "dhgroup": "ffdhe4096" 00:22:43.909 } 00:22:43.909 } 00:22:43.909 ]' 00:22:43.909 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:43.909 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.909 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:43.909 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:43.909 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:44.167 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.167 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.167 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.425 13:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:22:45.360 13:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.360 13:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.360 13:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.360 13:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.360 13:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.360 13:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:45.360 13:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:45.360 13:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.360 13:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.619 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:22:45.619 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:45.619 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:45.619 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:45.619 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:45.619 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.619 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.619 13:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.619 13:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.619 13:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.619 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.619 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.186 00:22:46.186 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:46.186 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:46.186 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.444 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.444 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.444 13:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.444 13:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.444 13:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.444 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:46.444 { 00:22:46.444 "cntlid": 129, 00:22:46.444 "qid": 0, 00:22:46.444 "state": "enabled", 00:22:46.444 "thread": "nvmf_tgt_poll_group_000", 00:22:46.444 "listen_address": { 00:22:46.444 "trtype": "TCP", 00:22:46.444 "adrfam": "IPv4", 00:22:46.444 "traddr": "10.0.0.2", 00:22:46.444 "trsvcid": "4420" 00:22:46.444 }, 00:22:46.444 "peer_address": { 00:22:46.444 "trtype": "TCP", 00:22:46.444 "adrfam": "IPv4", 00:22:46.444 "traddr": "10.0.0.1", 00:22:46.444 "trsvcid": "50270" 00:22:46.444 }, 00:22:46.444 "auth": { 00:22:46.444 "state": "completed", 00:22:46.444 "digest": "sha512", 00:22:46.444 "dhgroup": "ffdhe6144" 00:22:46.444 } 00:22:46.444 } 00:22:46.444 ]' 00:22:46.444 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:46.444 13:34:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.444 13:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:46.444 13:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:46.444 13:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:46.444 13:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.444 13:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.444 13:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.702 13:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:22:47.636 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.636 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.636 13:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.636 13:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.636 13:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.636 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:47.636 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:47.636 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:47.895 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:47.895 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:47.895 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:47.895 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:47.895 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:47.895 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.895 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.895 13:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.895 13:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.895 13:34:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.895 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.895 13:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.461 00:22:48.461 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:48.461 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:48.461 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.718 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.718 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.718 13:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.718 13:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.975 13:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.975 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:48.975 { 00:22:48.975 "cntlid": 131, 00:22:48.975 "qid": 0, 00:22:48.975 "state": "enabled", 00:22:48.976 "thread": "nvmf_tgt_poll_group_000", 00:22:48.976 "listen_address": { 00:22:48.976 "trtype": "TCP", 00:22:48.976 "adrfam": "IPv4", 00:22:48.976 "traddr": "10.0.0.2", 00:22:48.976 "trsvcid": "4420" 00:22:48.976 }, 00:22:48.976 "peer_address": { 00:22:48.976 "trtype": "TCP", 00:22:48.976 "adrfam": "IPv4", 00:22:48.976 "traddr": "10.0.0.1", 00:22:48.976 "trsvcid": "50304" 00:22:48.976 }, 00:22:48.976 "auth": { 00:22:48.976 "state": "completed", 00:22:48.976 "digest": "sha512", 00:22:48.976 "dhgroup": "ffdhe6144" 00:22:48.976 } 00:22:48.976 } 00:22:48.976 ]' 00:22:48.976 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:48.976 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:48.976 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:48.976 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:48.976 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:48.976 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.976 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.976 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.234 13:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:22:50.167 13:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.167 13:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.167 13:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.167 13:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.167 13:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.167 13:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:50.167 13:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:50.167 13:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:50.424 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:50.424 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:50.424 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:50.424 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:50.424 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:50.424 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.424 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.424 13:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.424 13:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.424 13:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.424 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.424 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.989 00:22:50.989 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:50.989 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:50.989 13:34:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.247 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.247 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.247 13:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.247 13:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.247 13:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.247 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:51.247 { 00:22:51.247 "cntlid": 133, 00:22:51.247 "qid": 0, 00:22:51.247 "state": "enabled", 00:22:51.247 "thread": "nvmf_tgt_poll_group_000", 00:22:51.247 "listen_address": { 00:22:51.247 "trtype": "TCP", 00:22:51.247 "adrfam": "IPv4", 00:22:51.247 "traddr": "10.0.0.2", 00:22:51.247 "trsvcid": "4420" 00:22:51.247 }, 00:22:51.247 "peer_address": { 00:22:51.247 "trtype": "TCP", 00:22:51.247 "adrfam": "IPv4", 00:22:51.247 "traddr": "10.0.0.1", 00:22:51.247 "trsvcid": "50334" 00:22:51.247 }, 00:22:51.247 "auth": { 00:22:51.247 "state": "completed", 00:22:51.247 "digest": "sha512", 00:22:51.247 "dhgroup": "ffdhe6144" 00:22:51.247 } 00:22:51.247 } 00:22:51.247 ]' 00:22:51.247 13:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:51.505 13:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.505 13:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:51.505 13:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:51.505 13:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:51.505 13:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.505 13:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.505 13:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.763 13:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:22:52.696 13:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.696 13:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:52.696 13:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.696 13:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.696 13:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.696 13:34:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:52.696 13:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.696 13:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.954 13:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:52.954 13:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:52.954 13:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:52.954 13:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:52.954 13:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:52.954 13:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.954 13:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:52.954 13:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.954 13:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.954 13:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.954 13:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:52.954 13:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:53.519 00:22:53.519 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:53.519 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:53.519 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.777 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.777 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.777 13:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.777 13:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.777 13:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.777 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:53.777 { 00:22:53.777 "cntlid": 135, 00:22:53.777 "qid": 0, 00:22:53.777 "state": "enabled", 00:22:53.777 "thread": "nvmf_tgt_poll_group_000", 00:22:53.777 "listen_address": { 00:22:53.777 "trtype": "TCP", 00:22:53.777 "adrfam": "IPv4", 00:22:53.777 "traddr": "10.0.0.2", 00:22:53.777 "trsvcid": "4420" 00:22:53.777 }, 
00:22:53.777 "peer_address": { 00:22:53.777 "trtype": "TCP", 00:22:53.777 "adrfam": "IPv4", 00:22:53.777 "traddr": "10.0.0.1", 00:22:53.777 "trsvcid": "58820" 00:22:53.777 }, 00:22:53.777 "auth": { 00:22:53.777 "state": "completed", 00:22:53.777 "digest": "sha512", 00:22:53.777 "dhgroup": "ffdhe6144" 00:22:53.777 } 00:22:53.777 } 00:22:53.777 ]' 00:22:53.777 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:53.777 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.777 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:54.035 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:54.035 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:54.035 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.035 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.035 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.292 13:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:22:55.225 13:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.225 13:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:55.225 13:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.225 13:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.225 13:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.225 13:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:55.225 13:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:55.225 13:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:55.225 13:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:55.483 13:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:55.483 13:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:55.483 13:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:55.483 13:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:55.483 13:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:55.483 13:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:55.483 13:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.483 13:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.483 13:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.483 13:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.483 13:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.483 13:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.413 00:22:56.413 13:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:56.413 13:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:56.413 13:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.670 13:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.670 13:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.670 13:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.671 13:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.671 13:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.671 13:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:56.671 { 00:22:56.671 "cntlid": 137, 00:22:56.671 "qid": 0, 00:22:56.671 "state": "enabled", 00:22:56.671 "thread": "nvmf_tgt_poll_group_000", 00:22:56.671 "listen_address": { 00:22:56.671 "trtype": "TCP", 00:22:56.671 "adrfam": "IPv4", 00:22:56.671 "traddr": "10.0.0.2", 00:22:56.671 "trsvcid": "4420" 00:22:56.671 }, 00:22:56.671 "peer_address": { 00:22:56.671 "trtype": "TCP", 00:22:56.671 "adrfam": "IPv4", 00:22:56.671 "traddr": "10.0.0.1", 00:22:56.671 "trsvcid": "58848" 00:22:56.671 }, 00:22:56.671 "auth": { 00:22:56.671 "state": "completed", 00:22:56.671 "digest": "sha512", 00:22:56.671 "dhgroup": "ffdhe8192" 00:22:56.671 } 00:22:56.671 } 00:22:56.671 ]' 00:22:56.671 13:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:56.671 13:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.671 13:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:56.671 13:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:56.671 13:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:56.671 13:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.671 13:34:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.671 13:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.927 13:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:22:57.858 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.858 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:57.858 13:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.858 13:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.858 13:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.858 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:57.858 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:57.858 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:58.116 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:58.116 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:58.116 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:58.116 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:58.116 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:58.116 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.116 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.116 13:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.116 13:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.116 13:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.116 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.116 13:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.050 00:22:59.051 13:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:59.051 13:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:59.051 13:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.308 13:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.308 13:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.308 13:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.308 13:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.308 13:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.308 13:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:59.308 { 00:22:59.308 "cntlid": 139, 00:22:59.308 "qid": 0, 00:22:59.308 "state": "enabled", 00:22:59.308 "thread": "nvmf_tgt_poll_group_000", 00:22:59.308 "listen_address": { 00:22:59.308 "trtype": "TCP", 00:22:59.308 "adrfam": "IPv4", 00:22:59.308 "traddr": "10.0.0.2", 00:22:59.308 "trsvcid": "4420" 00:22:59.308 }, 00:22:59.308 "peer_address": { 00:22:59.308 "trtype": "TCP", 00:22:59.308 "adrfam": "IPv4", 00:22:59.308 "traddr": "10.0.0.1", 00:22:59.308 "trsvcid": "58884" 00:22:59.308 }, 00:22:59.308 "auth": { 00:22:59.308 "state": "completed", 00:22:59.308 "digest": "sha512", 00:22:59.308 "dhgroup": "ffdhe8192" 00:22:59.308 } 00:22:59.308 } 00:22:59.308 ]' 00:22:59.308 13:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:59.308 13:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.308 13:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:59.566 13:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:59.566 13:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:59.566 13:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.566 13:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.566 13:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.824 13:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MmUzNmVkYmU3YWQzZjIzZGJhNzBiODUxOWExOTIwODEFZedg: --dhchap-ctrl-secret DHHC-1:02:NDQ0OGQ4NGI4NTZlZWQ4NDM1NGFlMDU2OGFiYTI0MjBkMWQ1NjRmMjM0YjVhZjI0St4wNw==: 00:23:00.758 13:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.758 13:34:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.758 13:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.758 13:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.758 13:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.758 13:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:00.758 13:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:00.758 13:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:01.016 13:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:23:01.016 13:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:01.016 13:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:01.016 13:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:01.016 13:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:01.016 13:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.016 13:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.016 13:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.016 13:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.016 13:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.016 13:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.016 13:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.951 00:23:01.951 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:01.951 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:01.951 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:02.210 { 00:23:02.210 "cntlid": 141, 00:23:02.210 "qid": 0, 00:23:02.210 "state": "enabled", 00:23:02.210 "thread": "nvmf_tgt_poll_group_000", 00:23:02.210 "listen_address": { 00:23:02.210 "trtype": "TCP", 00:23:02.210 "adrfam": "IPv4", 00:23:02.210 "traddr": "10.0.0.2", 00:23:02.210 "trsvcid": "4420" 00:23:02.210 }, 00:23:02.210 "peer_address": { 00:23:02.210 "trtype": "TCP", 00:23:02.210 "adrfam": "IPv4", 00:23:02.210 "traddr": "10.0.0.1", 00:23:02.210 "trsvcid": "58898" 00:23:02.210 }, 00:23:02.210 "auth": { 00:23:02.210 "state": "completed", 00:23:02.210 "digest": "sha512", 00:23:02.210 "dhgroup": "ffdhe8192" 00:23:02.210 } 00:23:02.210 } 00:23:02.210 ]' 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.210 13:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.468 13:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTMzMTA2ZDY2YTM2Y2I1MDJlYTYxYzUxZWM1MjcwZTRhZDRlNmQxNzgyOGNlYmM3Cz0dRw==: --dhchap-ctrl-secret DHHC-1:01:YjVmN2Q3NmI1OTVhYzA0OGZhZmMxMjZmYjFiOTNlNDEUNn0p: 00:23:03.402 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.402 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:03.402 13:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.402 13:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.402 13:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.402 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:03.402 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:03.402 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:03.660 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:23:03.660 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:03.660 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:03.660 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:03.660 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:03.660 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.660 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:03.660 13:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.660 13:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.660 13:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.660 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:03.660 13:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:04.595 00:23:04.595 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:04.595 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:04.595 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.854 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.854 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.854 13:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.854 13:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.854 13:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.854 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:04.854 { 00:23:04.854 "cntlid": 143, 00:23:04.854 "qid": 0, 00:23:04.854 "state": "enabled", 00:23:04.854 "thread": "nvmf_tgt_poll_group_000", 00:23:04.854 "listen_address": { 00:23:04.854 "trtype": "TCP", 00:23:04.854 "adrfam": "IPv4", 00:23:04.854 "traddr": "10.0.0.2", 00:23:04.854 "trsvcid": "4420" 00:23:04.854 }, 00:23:04.854 "peer_address": { 00:23:04.854 "trtype": "TCP", 00:23:04.854 "adrfam": "IPv4", 00:23:04.854 "traddr": "10.0.0.1", 00:23:04.854 "trsvcid": "57432" 00:23:04.854 }, 00:23:04.854 "auth": { 00:23:04.854 "state": "completed", 00:23:04.854 "digest": "sha512", 00:23:04.854 "dhgroup": "ffdhe8192" 00:23:04.854 } 00:23:04.854 } 00:23:04.854 ]' 00:23:04.854 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:04.854 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.854 
13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:05.112 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:05.112 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:05.112 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.112 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.112 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.371 13:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:23:06.305 13:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.305 13:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:06.305 13:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.305 13:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.305 13:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.305 13:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:06.305 13:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:23:06.305 13:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:06.305 13:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:06.305 13:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:06.305 13:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:06.563 13:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:23:06.563 13:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:06.563 13:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:06.563 13:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:06.563 13:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:06.563 13:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.563 13:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:23:06.563 13:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.563 13:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.563 13:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.563 13:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:06.563 13:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.530 00:23:07.530 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:07.530 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:07.530 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:07.789 { 00:23:07.789 "cntlid": 145, 00:23:07.789 "qid": 0, 00:23:07.789 "state": "enabled", 00:23:07.789 "thread": "nvmf_tgt_poll_group_000", 00:23:07.789 "listen_address": { 00:23:07.789 "trtype": "TCP", 00:23:07.789 "adrfam": "IPv4", 00:23:07.789 "traddr": "10.0.0.2", 00:23:07.789 "trsvcid": "4420" 00:23:07.789 }, 00:23:07.789 "peer_address": { 00:23:07.789 "trtype": "TCP", 00:23:07.789 "adrfam": "IPv4", 00:23:07.789 "traddr": "10.0.0.1", 00:23:07.789 "trsvcid": "57450" 00:23:07.789 }, 00:23:07.789 "auth": { 00:23:07.789 "state": "completed", 00:23:07.789 "digest": "sha512", 00:23:07.789 "dhgroup": "ffdhe8192" 00:23:07.789 } 00:23:07.789 } 00:23:07.789 ]' 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.789 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.048 13:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmJjM2I0OGI4MzEyZTMyOTMxYmMyMmVmZTZiNjFkZGM2ZTYyYzU4YzRlNTIxNTRhAgx46g==: --dhchap-ctrl-secret DHHC-1:03:MDFmZmI1MjkzOTg5MjVmMmVjNTU4ZTg1ZTNiOTBmMzZlMWIxNDc0YTU1NmU5Yzc0MmUyZGEyOWY4ZThiNzI0MjLE67g=: 00:23:08.977 13:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:09.235 13:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:23:10.168 request: 00:23:10.168 { 00:23:10.168 "name": "nvme0", 00:23:10.168 "trtype": "tcp", 00:23:10.168 "traddr": "10.0.0.2", 00:23:10.168 "adrfam": "ipv4", 00:23:10.168 "trsvcid": "4420", 00:23:10.168 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:10.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:10.168 "prchk_reftag": false, 00:23:10.168 "prchk_guard": false, 00:23:10.168 "hdgst": false, 00:23:10.168 "ddgst": false, 00:23:10.168 "dhchap_key": "key2", 00:23:10.168 "method": "bdev_nvme_attach_controller", 00:23:10.168 "req_id": 1 00:23:10.168 } 00:23:10.168 Got JSON-RPC error response 00:23:10.168 response: 00:23:10.168 { 00:23:10.168 "code": -5, 00:23:10.168 "message": "Input/output error" 00:23:10.168 } 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:10.168 13:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:10.732 request: 00:23:10.732 { 00:23:10.732 "name": "nvme0", 00:23:10.732 "trtype": "tcp", 00:23:10.732 "traddr": "10.0.0.2", 00:23:10.732 "adrfam": "ipv4", 00:23:10.732 "trsvcid": "4420", 00:23:10.732 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:10.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:10.732 "prchk_reftag": false, 00:23:10.732 "prchk_guard": false, 00:23:10.732 "hdgst": false, 00:23:10.732 "ddgst": false, 00:23:10.732 "dhchap_key": "key1", 00:23:10.732 "dhchap_ctrlr_key": "ckey2", 00:23:10.732 "method": "bdev_nvme_attach_controller", 00:23:10.732 "req_id": 1 00:23:10.732 } 00:23:10.732 Got JSON-RPC error response 00:23:10.732 response: 00:23:10.732 { 00:23:10.732 "code": -5, 00:23:10.732 "message": "Input/output error" 00:23:10.732 } 00:23:10.732 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:10.732 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:10.732 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:10.732 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:10.732 13:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.732 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.732 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.990 13:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.923 request: 00:23:11.923 { 00:23:11.923 "name": "nvme0", 00:23:11.923 "trtype": "tcp", 00:23:11.923 "traddr": "10.0.0.2", 00:23:11.923 "adrfam": "ipv4", 00:23:11.923 "trsvcid": "4420", 00:23:11.923 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:11.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:11.923 "prchk_reftag": false, 00:23:11.923 "prchk_guard": false, 00:23:11.923 "hdgst": false, 00:23:11.923 "ddgst": false, 00:23:11.923 "dhchap_key": "key1", 00:23:11.923 "dhchap_ctrlr_key": "ckey1", 00:23:11.923 "method": "bdev_nvme_attach_controller", 00:23:11.923 "req_id": 1 00:23:11.923 } 00:23:11.923 Got JSON-RPC error response 00:23:11.923 response: 00:23:11.923 { 00:23:11.923 "code": -5, 00:23:11.923 "message": "Input/output error" 00:23:11.923 } 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 294376 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 294376 ']' 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 294376 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 294376 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 294376' 00:23:11.923 killing process with pid 294376 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 294376 00:23:11.923 13:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 294376 00:23:13.298 13:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:13.298 13:34:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.298 13:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:13.298 13:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.298 13:34:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=317226 00:23:13.298 13:34:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:13.298 13:34:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 317226 00:23:13.298 13:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 317226 ']' 00:23:13.298 13:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.298 13:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.298 13:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.298 13:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.298 13:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 317226 00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 317226 ']' 00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.233 13:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.490 13:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.490 13:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:14.490 13:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:23:14.490 13:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.490 13:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.748 13:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.748 13:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:23:14.748 13:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:14.748 13:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:14.748 13:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:14.748 13:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:14.748 13:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.748 13:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:14.748 13:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.748 13:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.748 13:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.748 13:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:14.748 13:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:15.680 00:23:15.680 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:15.680 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:15.680 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.937 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.937 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.937 13:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.937 13:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.937 13:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.937 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:15.937 { 00:23:15.937 
"cntlid": 1, 00:23:15.937 "qid": 0, 00:23:15.937 "state": "enabled", 00:23:15.937 "thread": "nvmf_tgt_poll_group_000", 00:23:15.937 "listen_address": { 00:23:15.937 "trtype": "TCP", 00:23:15.937 "adrfam": "IPv4", 00:23:15.937 "traddr": "10.0.0.2", 00:23:15.937 "trsvcid": "4420" 00:23:15.937 }, 00:23:15.937 "peer_address": { 00:23:15.937 "trtype": "TCP", 00:23:15.937 "adrfam": "IPv4", 00:23:15.937 "traddr": "10.0.0.1", 00:23:15.937 "trsvcid": "48600" 00:23:15.937 }, 00:23:15.937 "auth": { 00:23:15.937 "state": "completed", 00:23:15.937 "digest": "sha512", 00:23:15.937 "dhgroup": "ffdhe8192" 00:23:15.937 } 00:23:15.937 } 00:23:15.937 ]' 00:23:15.937 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:15.937 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:15.937 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:16.194 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:16.194 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:16.194 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.194 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.194 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.452 13:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZWU1MzBiNmVmNzE1MDRiZjQ1MzY2Mjc5NjI3MDE5YmNjNDYzYzFkZTk4MTRmOWRmZTFkMWJjNjU0N2FmMjA0M4BjkRc=: 00:23:17.385 13:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.385 13:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:17.385 13:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.385 13:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.385 13:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.385 13:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:17.385 13:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.385 13:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.385 13:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.385 13:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:17.385 13:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:17.642 13:34:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:17.642 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:17.642 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:17.642 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:17.642 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.642 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:17.642 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.642 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:17.642 13:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:17.900 request: 00:23:17.900 { 00:23:17.900 "name": "nvme0", 00:23:17.900 "trtype": "tcp", 00:23:17.900 "traddr": "10.0.0.2", 00:23:17.900 "adrfam": "ipv4", 00:23:17.900 "trsvcid": "4420", 00:23:17.900 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:17.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:17.900 "prchk_reftag": false, 00:23:17.900 "prchk_guard": false, 00:23:17.900 "hdgst": false, 00:23:17.900 "ddgst": false, 00:23:17.900 "dhchap_key": "key3", 00:23:17.900 "method": "bdev_nvme_attach_controller", 00:23:17.900 "req_id": 1 00:23:17.900 } 00:23:17.900 Got JSON-RPC error response 00:23:17.900 response: 00:23:17.900 { 00:23:17.900 "code": -5, 00:23:17.900 "message": "Input/output error" 00:23:17.900 } 00:23:17.900 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:17.900 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:17.900 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:17.901 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:17.901 13:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:23:17.901 13:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:23:17.901 13:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:17.901 13:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:18.159 13:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:18.159 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:18.159 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:18.159 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:18.159 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.159 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:18.159 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.159 13:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:18.159 13:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:18.416 request: 00:23:18.416 { 00:23:18.416 "name": "nvme0", 00:23:18.416 "trtype": "tcp", 00:23:18.416 "traddr": "10.0.0.2", 00:23:18.416 "adrfam": "ipv4", 00:23:18.416 "trsvcid": "4420", 00:23:18.416 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:18.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:18.416 "prchk_reftag": false, 00:23:18.416 "prchk_guard": false, 00:23:18.416 "hdgst": false, 00:23:18.416 "ddgst": false, 00:23:18.416 "dhchap_key": "key3", 00:23:18.416 "method": "bdev_nvme_attach_controller", 00:23:18.416 "req_id": 1 00:23:18.416 } 00:23:18.416 Got JSON-RPC error response 00:23:18.416 response: 00:23:18.416 { 00:23:18.416 "code": -5, 00:23:18.416 "message": "Input/output error" 00:23:18.416 } 00:23:18.416 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:18.416 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:18.416 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:18.416 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:18.416 13:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:18.416 13:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:23:18.416 13:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:18.416 13:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:18.416 13:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:18.416 13:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:18.982 13:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:19.238 request: 00:23:19.238 { 00:23:19.238 "name": "nvme0", 00:23:19.238 "trtype": "tcp", 00:23:19.238 "traddr": "10.0.0.2", 00:23:19.238 "adrfam": "ipv4", 00:23:19.238 "trsvcid": "4420", 00:23:19.238 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:19.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:19.238 "prchk_reftag": false, 00:23:19.238 "prchk_guard": false, 00:23:19.238 "hdgst": false, 00:23:19.238 "ddgst": false, 00:23:19.238 
"dhchap_key": "key0", 00:23:19.238 "dhchap_ctrlr_key": "key1", 00:23:19.238 "method": "bdev_nvme_attach_controller", 00:23:19.238 "req_id": 1 00:23:19.238 } 00:23:19.238 Got JSON-RPC error response 00:23:19.238 response: 00:23:19.238 { 00:23:19.238 "code": -5, 00:23:19.238 "message": "Input/output error" 00:23:19.238 } 00:23:19.238 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:19.238 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:19.238 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:19.238 13:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:19.238 13:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:19.238 13:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:19.495 00:23:19.495 13:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:23:19.495 13:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:23:19.495 13:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.752 13:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.752 13:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.752 13:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.010 13:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:23:20.010 13:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:23:20.010 13:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 294524 00:23:20.010 13:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 294524 ']' 00:23:20.010 13:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 294524 00:23:20.010 13:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:20.010 13:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.010 13:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 294524 00:23:20.010 13:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:20.010 13:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:20.010 13:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 294524' 00:23:20.010 killing process with pid 294524 00:23:20.010 13:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 294524 00:23:20.010 13:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 294524 00:23:22.568 
13:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:22.568 rmmod nvme_tcp 00:23:22.568 rmmod nvme_fabrics 00:23:22.568 rmmod nvme_keyring 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 317226 ']' 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 317226 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 317226 ']' 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 317226 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 317226 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 317226' 00:23:22.568 killing process with pid 317226 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 317226 00:23:22.568 13:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 317226 00:23:23.941 13:34:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:23.941 13:34:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:23.941 13:34:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:23.941 13:34:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:23.941 13:34:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:23.941 13:34:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.941 13:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.941 13:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.842 13:35:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:25.842 13:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.lfO /tmp/spdk.key-sha256.wFv /tmp/spdk.key-sha384.Nso /tmp/spdk.key-sha512.qKF /tmp/spdk.key-sha512.gaC /tmp/spdk.key-sha384.2Dc /tmp/spdk.key-sha256.2kF '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:25.842 00:23:25.842 real 3m15.223s 00:23:25.842 user 7m30.082s 00:23:25.842 sys 0m24.774s 00:23:25.842 13:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:25.842 13:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.842 ************************************ 00:23:25.842 END TEST nvmf_auth_target 00:23:25.842 ************************************ 00:23:25.842 13:35:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:25.842 13:35:00 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:23:25.842 13:35:00 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:25.842 13:35:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:25.842 13:35:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.842 13:35:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.842 ************************************ 00:23:25.842 START TEST nvmf_bdevio_no_huge 00:23:25.842 ************************************ 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:25.842 * Looking for test storage... 00:23:25.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.842 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.843 13:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:27.741 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.741 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:27.742 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:27.742 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:27.742 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.742 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:28.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:23:28.000 00:23:28.000 --- 10.0.0.2 ping statistics --- 00:23:28.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.000 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:28.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:23:28.000 00:23:28.000 --- 10.0.0.1 ping statistics --- 00:23:28.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.000 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=320442 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 320442 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 320442 ']' 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:28.000 13:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.000 [2024-07-13 13:35:02.697957] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:28.000 [2024-07-13 13:35:02.698111] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:28.264 [2024-07-13 13:35:02.851592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.522 [2024-07-13 13:35:03.117595] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:28.522 [2024-07-13 13:35:03.117682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.522 [2024-07-13 13:35:03.117711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.522 [2024-07-13 13:35:03.117733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.522 [2024-07-13 13:35:03.117755] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.522 [2024-07-13 13:35:03.117904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:28.522 [2024-07-13 13:35:03.117978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:23:28.522 [2024-07-13 13:35:03.118000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.522 [2024-07-13 13:35:03.118016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:29.087 [2024-07-13 13:35:03.683108] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:29.087 Malloc0 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.087 13:35:03 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:29.087 [2024-07-13 13:35:03.772783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.087 { 00:23:29.087 "params": { 00:23:29.087 "name": "Nvme$subsystem", 00:23:29.087 "trtype": "$TEST_TRANSPORT", 00:23:29.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.087 "adrfam": "ipv4", 00:23:29.087 "trsvcid": "$NVMF_PORT", 00:23:29.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.087 "hdgst": ${hdgst:-false}, 00:23:29.087 "ddgst": ${ddgst:-false} 00:23:29.087 }, 00:23:29.087 "method": "bdev_nvme_attach_controller" 00:23:29.087 } 00:23:29.087 EOF 00:23:29.087 )") 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:29.087 13:35:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:29.087 "params": { 00:23:29.087 "name": "Nvme1", 00:23:29.087 "trtype": "tcp", 00:23:29.087 "traddr": "10.0.0.2", 00:23:29.087 "adrfam": "ipv4", 00:23:29.087 "trsvcid": "4420", 00:23:29.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.087 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.087 "hdgst": false, 00:23:29.087 "ddgst": false 00:23:29.087 }, 00:23:29.087 "method": "bdev_nvme_attach_controller" 00:23:29.087 }' 00:23:29.345 [2024-07-13 13:35:03.858682] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:29.345 [2024-07-13 13:35:03.858799] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid320624 ] 00:23:29.345 [2024-07-13 13:35:03.998430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:29.603 [2024-07-13 13:35:04.252515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.603 [2024-07-13 13:35:04.252560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.603 [2024-07-13 13:35:04.252565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.169 I/O targets: 00:23:30.169 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:30.169 00:23:30.169 00:23:30.169 CUnit - A unit testing framework for C - Version 2.1-3 00:23:30.169 http://cunit.sourceforge.net/ 00:23:30.169 00:23:30.169 00:23:30.169 Suite: bdevio tests on: Nvme1n1 00:23:30.169 Test: blockdev write read block ...passed 00:23:30.169 Test: blockdev write zeroes read block ...passed 00:23:30.169 Test: blockdev write zeroes read no split ...passed 00:23:30.427 Test: blockdev write zeroes read split ...passed 00:23:30.427 Test: blockdev write zeroes read split partial ...passed 00:23:30.427 Test: blockdev reset ...[2024-07-13 13:35:05.008373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:30.427 [2024-07-13 13:35:05.008573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:23:30.427 [2024-07-13 13:35:05.067482] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:30.427 passed 00:23:30.427 Test: blockdev write read 8 blocks ...passed 00:23:30.427 Test: blockdev write read size > 128k ...passed 00:23:30.427 Test: blockdev write read invalid size ...passed 00:23:30.427 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:30.427 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:30.427 Test: blockdev write read max offset ...passed 00:23:30.686 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:30.686 Test: blockdev writev readv 8 blocks ...passed 00:23:30.686 Test: blockdev writev readv 30 x 1block ...passed 00:23:30.686 Test: blockdev writev readv block ...passed 00:23:30.686 Test: blockdev writev readv size > 128k ...passed 00:23:30.686 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:30.686 Test: blockdev comparev and writev ...[2024-07-13 13:35:05.331388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.686 [2024-07-13 13:35:05.331467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.686 [2024-07-13 13:35:05.331505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.686 [2024-07-13 13:35:05.331532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:30.686 [2024-07-13 13:35:05.332086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.686 [2024-07-13 13:35:05.332125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:30.686 [2024-07-13 13:35:05.332160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.686 [2024-07-13 13:35:05.332185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:30.686 [2024-07-13 13:35:05.332680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.686 [2024-07-13 13:35:05.332712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:30.686 [2024-07-13 13:35:05.332750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.686 [2024-07-13 13:35:05.332776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:30.686 [2024-07-13 13:35:05.333249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.686 [2024-07-13 13:35:05.333281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:30.686 [2024-07-13 13:35:05.333312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:30.686 [2024-07-13 13:35:05.333342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:30.686 passed 00:23:30.686 Test: blockdev nvme passthru rw ...passed 00:23:30.686 Test: blockdev nvme passthru vendor specific ...[2024-07-13 13:35:05.416418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:30.686 [2024-07-13 13:35:05.416475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:30.686 [2024-07-13 13:35:05.416758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:30.686 [2024-07-13 13:35:05.416790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:30.686 [2024-07-13 13:35:05.417044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:30.686 [2024-07-13 13:35:05.417075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:30.686 [2024-07-13 13:35:05.417365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:30.686 [2024-07-13 13:35:05.417397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:30.686 passed 00:23:30.686 Test: blockdev nvme admin passthru ...passed 00:23:30.944 Test: blockdev copy ...passed 00:23:30.944 00:23:30.944 Run Summary: Type Total Ran Passed Failed Inactive 00:23:30.944 suites 1 1 n/a 0 0 00:23:30.944 tests 23 23 23 0 0 00:23:30.944 asserts 152 152 152 0 n/a 00:23:30.944 00:23:30.944 Elapsed time = 1.345 seconds 00:23:31.510 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.510 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.510 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:31.510 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.510 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:31.510 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:31.510 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:31.510 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:31.510 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:31.510 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:31.510 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:31.510 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:31.510 rmmod nvme_tcp 00:23:31.510 rmmod nvme_fabrics 00:23:31.510 rmmod nvme_keyring 00:23:31.510 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:31.777 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:23:31.777 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:31.777 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 320442 ']' 00:23:31.777 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 320442 00:23:31.777 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 320442 ']' 00:23:31.778 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 320442 00:23:31.778 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:23:31.778 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.778 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 320442 00:23:31.778 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:23:31.778 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:23:31.778 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 320442' 00:23:31.778 killing process with pid 320442 00:23:31.778 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 320442 00:23:31.778 13:35:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 320442 00:23:32.714 13:35:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:32.714 13:35:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:32.714 13:35:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:32.714 13:35:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:32.714 13:35:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:32.714 13:35:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.714 13:35:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:32.714 13:35:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.615 13:35:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:34.615 00:23:34.615 real 0m8.823s 00:23:34.615 user 0m20.507s 00:23:34.616 sys 0m2.814s 00:23:34.616 13:35:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:34.616 13:35:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:34.616 ************************************ 00:23:34.616 END TEST nvmf_bdevio_no_huge 00:23:34.616 ************************************ 00:23:34.616 13:35:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:34.616 13:35:09 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:34.616 13:35:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:34.616 13:35:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:34.616 13:35:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:34.616 ************************************ 00:23:34.616 START TEST nvmf_tls 00:23:34.616 ************************************ 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:34.616 * Looking for test storage... 
00:23:34.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:23:34.616 13:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:23:36.516 
13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:36.516 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:36.516 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:36.516 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.516 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:36.517 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.517 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:36.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:23:36.775 00:23:36.775 --- 10.0.0.2 ping statistics --- 00:23:36.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.775 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:36.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:23:36.775 00:23:36.775 --- 10.0.0.1 ping statistics --- 00:23:36.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.775 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=322870 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 322870 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 322870 ']' 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.775 13:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.775 [2024-07-13 13:35:11.390159] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:36.775 [2024-07-13 13:35:11.390310] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.775 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.033 [2024-07-13 13:35:11.523715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.033 [2024-07-13 13:35:11.749205] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.033 [2024-07-13 13:35:11.749270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:37.033 [2024-07-13 13:35:11.749309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.033 [2024-07-13 13:35:11.749330] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.033 [2024-07-13 13:35:11.749349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.033 [2024-07-13 13:35:11.749390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.598 13:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:37.598 13:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:37.598 13:35:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:37.598 13:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:37.598 13:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.598 13:35:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.598 13:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:37.598 13:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:38.192 true 00:23:38.192 13:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:38.192 13:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:38.192 13:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:38.192 13:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:38.192 13:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:38.450 13:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:38.450 13:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:38.708 13:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:38.708 13:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:38.708 13:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:38.966 13:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:38.966 13:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:39.223 13:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:39.224 13:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:39.224 13:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:39.224 13:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:39.480 13:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:39.480 13:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:39.480 13:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:39.737 13:35:14 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:39.737 13:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:39.995 13:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:39.995 13:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:39.995 13:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:40.254 13:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:40.254 13:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:40.512 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.QpnaoGhCcJ 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.wM9TMovaaH 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.QpnaoGhCcJ 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.wM9TMovaaH 00:23:40.513 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:23:40.771 13:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:41.703 13:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.QpnaoGhCcJ 00:23:41.703 13:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QpnaoGhCcJ 00:23:41.703 13:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:41.703 [2024-07-13 13:35:16.412014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.703 13:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:42.270 13:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:42.270 [2024-07-13 13:35:16.969598] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:42.270 [2024-07-13 13:35:16.969918] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.270 13:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:42.836 malloc0 00:23:42.836 13:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:42.836 13:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QpnaoGhCcJ 00:23:43.095 [2024-07-13 13:35:17.743548] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:43.095 13:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.QpnaoGhCcJ 00:23:43.095 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.291 Initializing NVMe Controllers 00:23:55.291 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:55.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:55.291 Initialization complete. Launching workers. 
00:23:55.291 ======================================================== 00:23:55.291 Latency(us) 00:23:55.291 Device Information : IOPS MiB/s Average min max 00:23:55.291 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5473.10 21.38 11698.96 2433.42 12949.05 00:23:55.291 ======================================================== 00:23:55.291 Total : 5473.10 21.38 11698.96 2433.42 12949.05 00:23:55.291 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QpnaoGhCcJ 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QpnaoGhCcJ' 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=324884 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 324884 /var/tmp/bdevperf.sock 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 324884 ']' 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:55.291 13:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.291 [2024-07-13 13:35:28.067181] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:55.291 [2024-07-13 13:35:28.067323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324884 ] 00:23:55.291 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.291 [2024-07-13 13:35:28.191137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.291 [2024-07-13 13:35:28.420225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.291 13:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:55.291 13:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:55.291 13:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QpnaoGhCcJ 00:23:55.291 [2024-07-13 13:35:29.292897] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.291 [2024-07-13 13:35:29.293105] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:55.291 TLSTESTn1 00:23:55.291 13:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:55.291 Running I/O for 10 seconds... 00:24:05.253 00:24:05.253 Latency(us) 00:24:05.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.253 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:05.253 Verification LBA range: start 0x0 length 0x2000 00:24:05.253 TLSTESTn1 : 10.04 2628.85 10.27 0.00 0.00 48554.61 7864.32 70293.43 00:24:05.253 =================================================================================================================== 00:24:05.253 Total : 2628.85 10.27 0.00 0.00 48554.61 7864.32 70293.43 00:24:05.253 0 00:24:05.253 13:35:39 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:05.253 13:35:39 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 324884 00:24:05.253 13:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 324884 ']' 00:24:05.253 13:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 324884 00:24:05.253 13:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:05.253 13:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:05.253 13:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 324884 00:24:05.253 13:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:05.253 13:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:05.253 13:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 324884' 00:24:05.253 killing process with pid 324884 00:24:05.253 13:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 324884 00:24:05.253 Received shutdown signal, test time was about 10.000000 seconds 00:24:05.253 00:24:05.253 Latency(us) 00:24:05.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:24:05.253 =================================================================================================================== 00:24:05.253 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.253 [2024-07-13 13:35:39.627317] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:05.253 13:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 324884 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wM9TMovaaH 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wM9TMovaaH 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wM9TMovaaH 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wM9TMovaaH' 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=326337 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 326337 /var/tmp/bdevperf.sock 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 326337 ']' 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:06.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:06.188 13:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.188 [2024-07-13 13:35:40.723246] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:06.188 [2024-07-13 13:35:40.723392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326337 ] 00:24:06.188 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.188 [2024-07-13 13:35:40.844722] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.447 [2024-07-13 13:35:41.068311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.049 13:35:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:07.049 13:35:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:07.049 13:35:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wM9TMovaaH 00:24:07.312 [2024-07-13 13:35:41.904137] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.312 [2024-07-13 13:35:41.904351] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:07.312 [2024-07-13 13:35:41.914495] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:07.312 [2024-07-13 13:35:41.915240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:24:07.312 [2024-07-13 13:35:41.916233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:24:07.312 [2024-07-13 13:35:41.917223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.312 [2024-07-13 13:35:41.917254] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:07.312 [2024-07-13 13:35:41.917295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:07.312 request: 00:24:07.312 { 00:24:07.312 "name": "TLSTEST", 00:24:07.312 "trtype": "tcp", 00:24:07.312 "traddr": "10.0.0.2", 00:24:07.312 "adrfam": "ipv4", 00:24:07.312 "trsvcid": "4420", 00:24:07.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.312 "prchk_reftag": false, 00:24:07.312 "prchk_guard": false, 00:24:07.312 "hdgst": false, 00:24:07.312 "ddgst": false, 00:24:07.312 "psk": "/tmp/tmp.wM9TMovaaH", 00:24:07.312 "method": "bdev_nvme_attach_controller", 00:24:07.312 "req_id": 1 00:24:07.312 } 00:24:07.312 Got JSON-RPC error response 00:24:07.312 response: 00:24:07.312 { 00:24:07.312 "code": -5, 00:24:07.312 "message": "Input/output error" 00:24:07.312 } 00:24:07.312 13:35:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 326337 00:24:07.312 13:35:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 326337 ']' 00:24:07.312 13:35:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 326337 00:24:07.312 13:35:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:07.312 13:35:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:07.312 13:35:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 326337 00:24:07.312 13:35:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:07.312 13:35:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:07.312 13:35:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 326337' 00:24:07.312 killing process with pid 326337 00:24:07.312 13:35:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 326337 00:24:07.312 Received shutdown signal, test time was about 10.000000 seconds 00:24:07.312 00:24:07.312 Latency(us) 00:24:07.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.312 =================================================================================================================== 00:24:07.312 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:07.312 13:35:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 326337 00:24:07.312 [2024-07-13 13:35:41.959281] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:08.263 13:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:08.263 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:08.263 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:08.263 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:08.263 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:08.263 13:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QpnaoGhCcJ 00:24:08.263 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:08.263 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QpnaoGhCcJ 00:24:08.263 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:08.263 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.263 13:35:42 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:08.263 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QpnaoGhCcJ 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QpnaoGhCcJ' 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=326611 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 326611 /var/tmp/bdevperf.sock 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 326611 ']' 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.264 13:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.264 [2024-07-13 13:35:42.963287] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:08.266 [2024-07-13 13:35:42.963431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326611 ] 00:24:08.528 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.528 [2024-07-13 13:35:43.083267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.786 [2024-07-13 13:35:43.309411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.351 13:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.351 13:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:09.351 13:35:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.QpnaoGhCcJ 00:24:09.351 [2024-07-13 13:35:44.092437] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:09.351 [2024-07-13 13:35:44.092663] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:09.609 [2024-07-13 13:35:44.103466] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:09.609 [2024-07-13 13:35:44.103510] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:09.609 [2024-07-13 13:35:44.103594] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:09.609 [2024-07-13 13:35:44.104546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:24:09.609 [2024-07-13 13:35:44.105522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:24:09.609 [2024-07-13 13:35:44.106514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.609 [2024-07-13 13:35:44.106564] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:09.609 [2024-07-13 13:35:44.106592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:09.609 request: 00:24:09.609 { 00:24:09.609 "name": "TLSTEST", 00:24:09.609 "trtype": "tcp", 00:24:09.609 "traddr": "10.0.0.2", 00:24:09.609 "adrfam": "ipv4", 00:24:09.609 "trsvcid": "4420", 00:24:09.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.609 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:09.609 "prchk_reftag": false, 00:24:09.609 "prchk_guard": false, 00:24:09.609 "hdgst": false, 00:24:09.609 "ddgst": false, 00:24:09.609 "psk": "/tmp/tmp.QpnaoGhCcJ", 00:24:09.609 "method": "bdev_nvme_attach_controller", 00:24:09.609 "req_id": 1 00:24:09.609 } 00:24:09.609 Got JSON-RPC error response 00:24:09.609 response: 00:24:09.609 { 00:24:09.609 "code": -5, 00:24:09.609 "message": "Input/output error" 00:24:09.609 } 00:24:09.609 13:35:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 326611 00:24:09.609 13:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 326611 ']' 00:24:09.609 13:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 326611 00:24:09.609 13:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:09.609 13:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:09.609 13:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 326611 00:24:09.609 13:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:09.609 13:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:09.609 13:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 326611' 00:24:09.609 killing process with pid 326611 00:24:09.609 13:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 326611 00:24:09.609 Received shutdown signal, test time was about 10.000000 seconds 00:24:09.609 00:24:09.609 Latency(us) 00:24:09.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.609 =================================================================================================================== 00:24:09.609 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:09.609 13:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 326611 00:24:09.609 [2024-07-13 13:35:44.148391] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:10.543 13:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:10.543 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:10.543 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:10.543 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:10.543 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:10.543 13:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QpnaoGhCcJ 00:24:10.543 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:10.543 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QpnaoGhCcJ 00:24:10.543 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:10.543 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:10.543 13:35:45 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QpnaoGhCcJ 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QpnaoGhCcJ' 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=326881 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 326881 /var/tmp/bdevperf.sock 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 326881 ']' 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:10.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.544 13:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.544 [2024-07-13 13:35:45.176861] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:10.544 [2024-07-13 13:35:45.177015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326881 ] 00:24:10.544 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.802 [2024-07-13 13:35:45.319410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.060 [2024-07-13 13:35:45.673988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.627 13:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:11.627 13:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:11.627 13:35:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QpnaoGhCcJ 00:24:11.885 [2024-07-13 13:35:46.384450] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.885 [2024-07-13 13:35:46.384705] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:11.885 [2024-07-13 13:35:46.399244] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:11.885 [2024-07-13 13:35:46.399288] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:11.885 [2024-07-13 13:35:46.399363] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:11.885 [2024-07-13 13:35:46.400166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:24:11.885 [2024-07-13 13:35:46.401130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:24:11.885 [2024-07-13 13:35:46.402118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:11.885 [2024-07-13 13:35:46.402180] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:11.885 [2024-07-13 13:35:46.402216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:11.885 request: 00:24:11.885 { 00:24:11.885 "name": "TLSTEST", 00:24:11.885 "trtype": "tcp", 00:24:11.885 "traddr": "10.0.0.2", 00:24:11.885 "adrfam": "ipv4", 00:24:11.885 "trsvcid": "4420", 00:24:11.885 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:11.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.885 "prchk_reftag": false, 00:24:11.885 "prchk_guard": false, 00:24:11.885 "hdgst": false, 00:24:11.885 "ddgst": false, 00:24:11.885 "psk": "/tmp/tmp.QpnaoGhCcJ", 00:24:11.885 "method": "bdev_nvme_attach_controller", 00:24:11.885 "req_id": 1 00:24:11.885 } 00:24:11.885 Got JSON-RPC error response 00:24:11.885 response: 00:24:11.885 { 00:24:11.885 "code": -5, 00:24:11.885 "message": "Input/output error" 00:24:11.885 } 00:24:11.885 13:35:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 326881 00:24:11.885 13:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 326881 ']' 00:24:11.885 13:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 326881 00:24:11.885 13:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:11.885 13:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.885 13:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 326881 00:24:11.885 13:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:11.885 13:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:11.885 13:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 326881' 00:24:11.885 killing process with pid 326881 00:24:11.885 13:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 326881 00:24:11.885 Received shutdown signal, test time was about 10.000000 seconds 00:24:11.885 00:24:11.885 Latency(us) 00:24:11.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.886 =================================================================================================================== 00:24:11.886 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:11.886 [2024-07-13 13:35:46.451714] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:11.886 13:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 326881 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=327158 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 327158 /var/tmp/bdevperf.sock 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 327158 ']' 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.821 13:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.821 [2024-07-13 13:35:47.515429] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:12.821 [2024-07-13 13:35:47.515573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327158 ] 00:24:13.079 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.079 [2024-07-13 13:35:47.658028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.337 [2024-07-13 13:35:48.012383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.903 13:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.903 13:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:13.903 13:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:14.161 [2024-07-13 13:35:48.710744] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:14.161 [2024-07-13 13:35:48.713064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:24:14.161 [2024-07-13 13:35:48.714050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:14.161 [2024-07-13 13:35:48.714086] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:14.161 [2024-07-13 13:35:48.714110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:14.161 request: 00:24:14.161 { 00:24:14.161 "name": "TLSTEST", 00:24:14.161 "trtype": "tcp", 00:24:14.161 "traddr": "10.0.0.2", 00:24:14.161 "adrfam": "ipv4", 00:24:14.161 "trsvcid": "4420", 00:24:14.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.161 "prchk_reftag": false, 00:24:14.161 "prchk_guard": false, 00:24:14.161 "hdgst": false, 00:24:14.161 "ddgst": false, 00:24:14.161 "method": "bdev_nvme_attach_controller", 00:24:14.161 "req_id": 1 00:24:14.161 } 00:24:14.161 Got JSON-RPC error response 00:24:14.161 response: 00:24:14.161 { 00:24:14.161 "code": -5, 00:24:14.161 "message": "Input/output error" 00:24:14.161 } 00:24:14.161 13:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 327158 00:24:14.161 13:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 327158 ']' 00:24:14.161 13:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 327158 00:24:14.161 13:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:14.161 13:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:14.161 13:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 327158 00:24:14.161 13:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:14.161 13:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:14.161 13:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 327158' 00:24:14.161 killing process with pid 327158 00:24:14.161 13:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 327158 00:24:14.161 Received shutdown signal, test time was about 10.000000 seconds 00:24:14.161 00:24:14.161 Latency(us) 00:24:14.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.161 =================================================================================================================== 00:24:14.161 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:14.161 13:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 327158 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 322870 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 322870 ']' 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 322870 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 322870 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 322870' 00:24:15.096 killing 
process with pid 322870 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 322870 00:24:15.096 [2024-07-13 13:35:49.725058] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:15.096 13:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 322870 00:24:16.471 13:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:16.471 13:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:16.471 13:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.471 13:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:16.471 13:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:16.471 13:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:24:16.471 13:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.uVXmLoEiac 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.uVXmLoEiac 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=327571 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 327571 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 327571 ']' 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.728 13:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.728 [2024-07-13 13:35:51.355795] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:16.728 [2024-07-13 13:35:51.355957] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.728 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.985 [2024-07-13 13:35:51.497318] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.243 [2024-07-13 13:35:51.754290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.243 [2024-07-13 13:35:51.754369] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.243 [2024-07-13 13:35:51.754400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.243 [2024-07-13 13:35:51.754426] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.243 [2024-07-13 13:35:51.754447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.243 [2024-07-13 13:35:51.754497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.808 13:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.808 13:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:17.808 13:35:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:17.808 13:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:17.808 13:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.808 13:35:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.808 13:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.uVXmLoEiac 00:24:17.808 13:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uVXmLoEiac 00:24:17.808 13:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:18.066 [2024-07-13 13:35:52.595782] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.066 13:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:18.324 13:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:18.581 [2024-07-13 13:35:53.097154] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:18.581 [2024-07-13 13:35:53.097467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.581 13:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:18.839 malloc0 00:24:18.839 13:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:19.097 13:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.uVXmLoEiac 00:24:19.355 [2024-07-13 13:35:53.975183] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uVXmLoEiac 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uVXmLoEiac' 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=327987 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 327987 /var/tmp/bdevperf.sock 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 327987 ']' 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.355 13:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.355 [2024-07-13 13:35:54.067087] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:19.355 [2024-07-13 13:35:54.067227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327987 ] 00:24:19.612 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.612 [2024-07-13 13:35:54.189763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.870 [2024-07-13 13:35:54.412416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.435 13:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.435 13:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:20.435 13:35:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uVXmLoEiac 00:24:20.724 [2024-07-13 13:35:55.241058] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.724 [2024-07-13 13:35:55.241272] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:20.724 TLSTESTn1 00:24:20.724 13:35:55 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:20.988 Running I/O for 10 seconds... 00:24:30.945 00:24:30.945 Latency(us) 00:24:30.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.945 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:30.945 Verification LBA range: start 0x0 length 0x2000 00:24:30.945 TLSTESTn1 : 10.04 2662.93 10.40 0.00 0.00 47936.03 8349.77 65244.73 00:24:30.945 =================================================================================================================== 00:24:30.945 Total : 2662.93 10.40 0.00 0.00 47936.03 8349.77 65244.73 00:24:30.945 0 00:24:30.945 13:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:30.945 13:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 327987 00:24:30.945 13:36:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 327987 ']' 00:24:30.945 13:36:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 327987 00:24:30.945 13:36:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:30.945 13:36:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:30.945 13:36:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 327987 00:24:30.945 13:36:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:30.945 13:36:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:30.945 13:36:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 327987' 00:24:30.945 killing process with pid 327987 00:24:30.945 13:36:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 327987 00:24:30.945 Received shutdown signal, test time was about 10.000000 seconds 00:24:30.945 00:24:30.945 Latency(us) 00:24:30.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:24:30.945 =================================================================================================================== 00:24:30.945 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:30.945 [2024-07-13 13:36:05.559692] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:30.945 13:36:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 327987 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.uVXmLoEiac 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uVXmLoEiac 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uVXmLoEiac 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uVXmLoEiac 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uVXmLoEiac' 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=329543 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 329543 /var/tmp/bdevperf.sock 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 329543 ']' 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.879 13:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.879 [2024-07-13 13:36:06.605277] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:31.879 [2024-07-13 13:36:06.605431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329543 ] 00:24:32.136 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.136 [2024-07-13 13:36:06.727103] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.393 [2024-07-13 13:36:06.949575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.959 13:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:32.959 13:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:32.959 13:36:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uVXmLoEiac 00:24:33.217 [2024-07-13 13:36:07.799615] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.217 [2024-07-13 13:36:07.799705] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:33.217 [2024-07-13 13:36:07.799728] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.uVXmLoEiac 00:24:33.217 request: 00:24:33.217 { 00:24:33.217 "name": "TLSTEST", 00:24:33.217 "trtype": "tcp", 00:24:33.217 "traddr": "10.0.0.2", 00:24:33.217 "adrfam": "ipv4", 00:24:33.217 "trsvcid": "4420", 00:24:33.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.217 "prchk_reftag": false, 00:24:33.217 "prchk_guard": false, 00:24:33.217 "hdgst": false, 00:24:33.217 "ddgst": false, 00:24:33.217 "psk": "/tmp/tmp.uVXmLoEiac", 00:24:33.217 "method": "bdev_nvme_attach_controller", 00:24:33.217 "req_id": 1 00:24:33.217 } 00:24:33.217 Got JSON-RPC error response 00:24:33.217 response: 00:24:33.217 { 00:24:33.217 "code": -1, 00:24:33.217 "message": "Operation not permitted" 00:24:33.217 } 00:24:33.217 13:36:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 329543 00:24:33.217 13:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 329543 ']' 00:24:33.217 13:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 329543 00:24:33.217 13:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:33.217 13:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:33.217 13:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 329543 00:24:33.217 13:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:33.217 13:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:33.217 13:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 329543' 00:24:33.217 killing process with pid 329543 00:24:33.217 13:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 329543 00:24:33.217 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.217 00:24:33.217 Latency(us) 00:24:33.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.217 =================================================================================================================== 
00:24:33.217 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:33.217 13:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 329543 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 327571 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 327571 ']' 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 327571 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 327571 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 327571' 00:24:34.151 killing process with pid 327571 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 327571 00:24:34.151 [2024-07-13 13:36:08.798588] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:34.151 13:36:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 327571 00:24:35.525 13:36:10 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:35.525 13:36:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:35.525 13:36:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:35.525 13:36:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.525 13:36:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=330466 00:24:35.525 13:36:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:35.525 13:36:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 330466 00:24:35.525 13:36:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 330466 ']' 00:24:35.525 13:36:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.525 13:36:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.525 13:36:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.525 13:36:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.525 13:36:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.784 [2024-07-13 13:36:10.310562] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:35.784 [2024-07-13 13:36:10.310702] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.784 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.784 [2024-07-13 13:36:10.447533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.042 [2024-07-13 13:36:10.688794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.042 [2024-07-13 13:36:10.688884] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.042 [2024-07-13 13:36:10.688927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.042 [2024-07-13 13:36:10.688950] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.042 [2024-07-13 13:36:10.688969] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.042 [2024-07-13 13:36:10.689035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.uVXmLoEiac 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.uVXmLoEiac 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.uVXmLoEiac 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uVXmLoEiac 00:24:36.608 13:36:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:36.866 [2024-07-13 13:36:11.538938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.866 13:36:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:37.123 13:36:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:37.381 [2024-07-13 13:36:12.076419] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:24:37.381 [2024-07-13 13:36:12.076717] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.381 13:36:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:37.946 malloc0 00:24:37.946 13:36:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:38.204 13:36:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uVXmLoEiac 00:24:38.204 [2024-07-13 13:36:12.914155] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:38.204 [2024-07-13 13:36:12.914212] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:38.204 [2024-07-13 13:36:12.914257] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:38.204 request: 00:24:38.204 { 00:24:38.204 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.204 "host": "nqn.2016-06.io.spdk:host1", 00:24:38.204 "psk": "/tmp/tmp.uVXmLoEiac", 00:24:38.204 "method": "nvmf_subsystem_add_host", 00:24:38.204 "req_id": 1 00:24:38.204 } 00:24:38.204 Got JSON-RPC error response 00:24:38.204 response: 00:24:38.204 { 00:24:38.204 "code": -32603, 00:24:38.204 "message": "Internal error" 00:24:38.204 } 00:24:38.204 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:38.204 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:38.204 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:38.204 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:38.204 13:36:12 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 330466 00:24:38.204 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 330466 ']' 00:24:38.204 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 330466 00:24:38.204 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:38.204 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:38.204 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 330466 00:24:38.462 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:38.462 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:38.462 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 330466' 00:24:38.462 killing process with pid 330466 00:24:38.462 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 330466 00:24:38.462 13:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 330466 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.uVXmLoEiac 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=331021 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 331021 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 331021 ']' 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:39.834 13:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.834 [2024-07-13 13:36:14.463740] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:39.834 [2024-07-13 13:36:14.463897] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.834 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.092 [2024-07-13 13:36:14.595406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.092 [2024-07-13 13:36:14.817638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.092 [2024-07-13 13:36:14.817719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.092 [2024-07-13 13:36:14.817747] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.092 [2024-07-13 13:36:14.817770] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.092 [2024-07-13 13:36:14.817788] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
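The -32603 "Internal error" earlier in this trace came from tcp.c rejecting the PSK file's permissions; after the chmod 0600 the same setup succeeds against the new target. A condensed view of the sequence setup_nvmf_tgt runs, lifted from the rpc.py calls visible in this trace (the address, NQNs, and /tmp key path are this run's throwaway values):

key=/tmp/tmp.uVXmLoEiac
chmod 0600 "$key"    # the earlier -32603 failure was the target refusing a key file with looser permissions
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k maps to "secure_channel": true (TLS)
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"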
00:24:40.092 [2024-07-13 13:36:14.817836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.026 13:36:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:41.026 13:36:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:41.026 13:36:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:41.026 13:36:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:41.026 13:36:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.026 13:36:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.026 13:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.uVXmLoEiac 00:24:41.026 13:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uVXmLoEiac 00:24:41.026 13:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:41.026 [2024-07-13 13:36:15.660401] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.026 13:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:41.284 13:36:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:41.543 [2024-07-13 13:36:16.153741] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:41.543 [2024-07-13 13:36:16.154106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.543 13:36:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:41.832 malloc0 00:24:41.832 13:36:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:42.089 13:36:16 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uVXmLoEiac 00:24:42.347 [2024-07-13 13:36:17.071014] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:42.347 13:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=331319 00:24:42.347 13:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:42.347 13:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:42.347 13:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 331319 /var/tmp/bdevperf.sock 00:24:42.347 13:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 331319 ']' 00:24:42.347 13:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.347 13:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:42.347 13:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:42.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.347 13:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:42.347 13:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.605 [2024-07-13 13:36:17.171751] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:42.605 [2024-07-13 13:36:17.171906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331319 ] 00:24:42.605 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.605 [2024-07-13 13:36:17.296026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.864 [2024-07-13 13:36:17.523519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.430 13:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:43.430 13:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:43.430 13:36:18 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uVXmLoEiac 00:24:43.688 [2024-07-13 13:36:18.340678] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:43.688 [2024-07-13 13:36:18.340884] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:43.688 TLSTESTn1 00:24:43.946 13:36:18 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:44.204 13:36:18 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:44.204 "subsystems": [ 00:24:44.204 { 00:24:44.204 "subsystem": "keyring", 00:24:44.204 "config": [] 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "subsystem": "iobuf", 00:24:44.204 "config": [ 00:24:44.204 { 00:24:44.204 "method": "iobuf_set_options", 00:24:44.204 "params": { 00:24:44.204 "small_pool_count": 8192, 00:24:44.204 "large_pool_count": 1024, 00:24:44.204 "small_bufsize": 8192, 00:24:44.204 "large_bufsize": 135168 00:24:44.204 } 00:24:44.204 } 00:24:44.204 ] 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "subsystem": "sock", 00:24:44.204 "config": [ 00:24:44.204 { 00:24:44.204 "method": "sock_set_default_impl", 00:24:44.204 "params": { 00:24:44.204 "impl_name": "posix" 00:24:44.204 } 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "method": "sock_impl_set_options", 00:24:44.204 "params": { 00:24:44.204 "impl_name": "ssl", 00:24:44.204 "recv_buf_size": 4096, 00:24:44.204 "send_buf_size": 4096, 00:24:44.204 "enable_recv_pipe": true, 00:24:44.204 "enable_quickack": false, 00:24:44.204 "enable_placement_id": 0, 00:24:44.204 "enable_zerocopy_send_server": true, 00:24:44.204 "enable_zerocopy_send_client": false, 00:24:44.204 "zerocopy_threshold": 0, 00:24:44.204 "tls_version": 0, 00:24:44.204 "enable_ktls": false 00:24:44.204 } 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "method": "sock_impl_set_options", 00:24:44.204 "params": { 00:24:44.204 "impl_name": "posix", 00:24:44.204 "recv_buf_size": 2097152, 00:24:44.204 
"send_buf_size": 2097152, 00:24:44.204 "enable_recv_pipe": true, 00:24:44.204 "enable_quickack": false, 00:24:44.204 "enable_placement_id": 0, 00:24:44.204 "enable_zerocopy_send_server": true, 00:24:44.204 "enable_zerocopy_send_client": false, 00:24:44.204 "zerocopy_threshold": 0, 00:24:44.204 "tls_version": 0, 00:24:44.204 "enable_ktls": false 00:24:44.204 } 00:24:44.204 } 00:24:44.204 ] 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "subsystem": "vmd", 00:24:44.204 "config": [] 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "subsystem": "accel", 00:24:44.204 "config": [ 00:24:44.204 { 00:24:44.204 "method": "accel_set_options", 00:24:44.204 "params": { 00:24:44.204 "small_cache_size": 128, 00:24:44.204 "large_cache_size": 16, 00:24:44.204 "task_count": 2048, 00:24:44.204 "sequence_count": 2048, 00:24:44.204 "buf_count": 2048 00:24:44.204 } 00:24:44.204 } 00:24:44.204 ] 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "subsystem": "bdev", 00:24:44.204 "config": [ 00:24:44.204 { 00:24:44.204 "method": "bdev_set_options", 00:24:44.204 "params": { 00:24:44.204 "bdev_io_pool_size": 65535, 00:24:44.204 "bdev_io_cache_size": 256, 00:24:44.204 "bdev_auto_examine": true, 00:24:44.204 "iobuf_small_cache_size": 128, 00:24:44.204 "iobuf_large_cache_size": 16 00:24:44.204 } 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "method": "bdev_raid_set_options", 00:24:44.204 "params": { 00:24:44.204 "process_window_size_kb": 1024 00:24:44.204 } 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "method": "bdev_iscsi_set_options", 00:24:44.204 "params": { 00:24:44.204 "timeout_sec": 30 00:24:44.204 } 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "method": "bdev_nvme_set_options", 00:24:44.204 "params": { 00:24:44.204 "action_on_timeout": "none", 00:24:44.204 "timeout_us": 0, 00:24:44.204 "timeout_admin_us": 0, 00:24:44.204 "keep_alive_timeout_ms": 10000, 00:24:44.204 "arbitration_burst": 0, 00:24:44.204 "low_priority_weight": 0, 00:24:44.204 "medium_priority_weight": 0, 00:24:44.204 "high_priority_weight": 0, 00:24:44.204 "nvme_adminq_poll_period_us": 10000, 00:24:44.204 "nvme_ioq_poll_period_us": 0, 00:24:44.204 "io_queue_requests": 0, 00:24:44.204 "delay_cmd_submit": true, 00:24:44.204 "transport_retry_count": 4, 00:24:44.204 "bdev_retry_count": 3, 00:24:44.204 "transport_ack_timeout": 0, 00:24:44.204 "ctrlr_loss_timeout_sec": 0, 00:24:44.204 "reconnect_delay_sec": 0, 00:24:44.204 "fast_io_fail_timeout_sec": 0, 00:24:44.204 "disable_auto_failback": false, 00:24:44.204 "generate_uuids": false, 00:24:44.204 "transport_tos": 0, 00:24:44.204 "nvme_error_stat": false, 00:24:44.204 "rdma_srq_size": 0, 00:24:44.204 "io_path_stat": false, 00:24:44.204 "allow_accel_sequence": false, 00:24:44.204 "rdma_max_cq_size": 0, 00:24:44.204 "rdma_cm_event_timeout_ms": 0, 00:24:44.204 "dhchap_digests": [ 00:24:44.204 "sha256", 00:24:44.204 "sha384", 00:24:44.204 "sha512" 00:24:44.204 ], 00:24:44.204 "dhchap_dhgroups": [ 00:24:44.204 "null", 00:24:44.204 "ffdhe2048", 00:24:44.204 "ffdhe3072", 00:24:44.204 "ffdhe4096", 00:24:44.204 "ffdhe6144", 00:24:44.204 "ffdhe8192" 00:24:44.204 ] 00:24:44.204 } 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "method": "bdev_nvme_set_hotplug", 00:24:44.204 "params": { 00:24:44.204 "period_us": 100000, 00:24:44.204 "enable": false 00:24:44.204 } 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "method": "bdev_malloc_create", 00:24:44.204 "params": { 00:24:44.204 "name": "malloc0", 00:24:44.204 "num_blocks": 8192, 00:24:44.204 "block_size": 4096, 00:24:44.204 "physical_block_size": 4096, 00:24:44.204 "uuid": 
"273ded98-ed76-4839-96df-db87ef6374dc", 00:24:44.204 "optimal_io_boundary": 0 00:24:44.204 } 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "method": "bdev_wait_for_examine" 00:24:44.204 } 00:24:44.204 ] 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "subsystem": "nbd", 00:24:44.204 "config": [] 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "subsystem": "scheduler", 00:24:44.204 "config": [ 00:24:44.204 { 00:24:44.204 "method": "framework_set_scheduler", 00:24:44.204 "params": { 00:24:44.204 "name": "static" 00:24:44.204 } 00:24:44.204 } 00:24:44.204 ] 00:24:44.204 }, 00:24:44.204 { 00:24:44.204 "subsystem": "nvmf", 00:24:44.204 "config": [ 00:24:44.204 { 00:24:44.204 "method": "nvmf_set_config", 00:24:44.205 "params": { 00:24:44.205 "discovery_filter": "match_any", 00:24:44.205 "admin_cmd_passthru": { 00:24:44.205 "identify_ctrlr": false 00:24:44.205 } 00:24:44.205 } 00:24:44.205 }, 00:24:44.205 { 00:24:44.205 "method": "nvmf_set_max_subsystems", 00:24:44.205 "params": { 00:24:44.205 "max_subsystems": 1024 00:24:44.205 } 00:24:44.205 }, 00:24:44.205 { 00:24:44.205 "method": "nvmf_set_crdt", 00:24:44.205 "params": { 00:24:44.205 "crdt1": 0, 00:24:44.205 "crdt2": 0, 00:24:44.205 "crdt3": 0 00:24:44.205 } 00:24:44.205 }, 00:24:44.205 { 00:24:44.205 "method": "nvmf_create_transport", 00:24:44.205 "params": { 00:24:44.205 "trtype": "TCP", 00:24:44.205 "max_queue_depth": 128, 00:24:44.205 "max_io_qpairs_per_ctrlr": 127, 00:24:44.205 "in_capsule_data_size": 4096, 00:24:44.205 "max_io_size": 131072, 00:24:44.205 "io_unit_size": 131072, 00:24:44.205 "max_aq_depth": 128, 00:24:44.205 "num_shared_buffers": 511, 00:24:44.205 "buf_cache_size": 4294967295, 00:24:44.205 "dif_insert_or_strip": false, 00:24:44.205 "zcopy": false, 00:24:44.205 "c2h_success": false, 00:24:44.205 "sock_priority": 0, 00:24:44.205 "abort_timeout_sec": 1, 00:24:44.205 "ack_timeout": 0, 00:24:44.205 "data_wr_pool_size": 0 00:24:44.205 } 00:24:44.205 }, 00:24:44.205 { 00:24:44.205 "method": "nvmf_create_subsystem", 00:24:44.205 "params": { 00:24:44.205 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.205 "allow_any_host": false, 00:24:44.205 "serial_number": "SPDK00000000000001", 00:24:44.205 "model_number": "SPDK bdev Controller", 00:24:44.205 "max_namespaces": 10, 00:24:44.205 "min_cntlid": 1, 00:24:44.205 "max_cntlid": 65519, 00:24:44.205 "ana_reporting": false 00:24:44.205 } 00:24:44.205 }, 00:24:44.205 { 00:24:44.205 "method": "nvmf_subsystem_add_host", 00:24:44.205 "params": { 00:24:44.205 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.205 "host": "nqn.2016-06.io.spdk:host1", 00:24:44.205 "psk": "/tmp/tmp.uVXmLoEiac" 00:24:44.205 } 00:24:44.205 }, 00:24:44.205 { 00:24:44.205 "method": "nvmf_subsystem_add_ns", 00:24:44.205 "params": { 00:24:44.205 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.205 "namespace": { 00:24:44.205 "nsid": 1, 00:24:44.205 "bdev_name": "malloc0", 00:24:44.205 "nguid": "273DED98ED76483996DFDB87EF6374DC", 00:24:44.205 "uuid": "273ded98-ed76-4839-96df-db87ef6374dc", 00:24:44.205 "no_auto_visible": false 00:24:44.205 } 00:24:44.205 } 00:24:44.205 }, 00:24:44.205 { 00:24:44.205 "method": "nvmf_subsystem_add_listener", 00:24:44.205 "params": { 00:24:44.205 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.205 "listen_address": { 00:24:44.205 "trtype": "TCP", 00:24:44.205 "adrfam": "IPv4", 00:24:44.205 "traddr": "10.0.0.2", 00:24:44.205 "trsvcid": "4420" 00:24:44.205 }, 00:24:44.205 "secure_channel": true 00:24:44.205 } 00:24:44.205 } 00:24:44.205 ] 00:24:44.205 } 00:24:44.205 ] 00:24:44.205 }' 00:24:44.205 13:36:18 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:44.463 13:36:19 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:44.463 "subsystems": [ 00:24:44.463 { 00:24:44.463 "subsystem": "keyring", 00:24:44.463 "config": [] 00:24:44.463 }, 00:24:44.463 { 00:24:44.463 "subsystem": "iobuf", 00:24:44.463 "config": [ 00:24:44.463 { 00:24:44.463 "method": "iobuf_set_options", 00:24:44.463 "params": { 00:24:44.463 "small_pool_count": 8192, 00:24:44.463 "large_pool_count": 1024, 00:24:44.463 "small_bufsize": 8192, 00:24:44.463 "large_bufsize": 135168 00:24:44.463 } 00:24:44.463 } 00:24:44.463 ] 00:24:44.463 }, 00:24:44.463 { 00:24:44.463 "subsystem": "sock", 00:24:44.463 "config": [ 00:24:44.463 { 00:24:44.463 "method": "sock_set_default_impl", 00:24:44.463 "params": { 00:24:44.463 "impl_name": "posix" 00:24:44.463 } 00:24:44.463 }, 00:24:44.463 { 00:24:44.463 "method": "sock_impl_set_options", 00:24:44.463 "params": { 00:24:44.463 "impl_name": "ssl", 00:24:44.463 "recv_buf_size": 4096, 00:24:44.463 "send_buf_size": 4096, 00:24:44.463 "enable_recv_pipe": true, 00:24:44.463 "enable_quickack": false, 00:24:44.463 "enable_placement_id": 0, 00:24:44.463 "enable_zerocopy_send_server": true, 00:24:44.463 "enable_zerocopy_send_client": false, 00:24:44.463 "zerocopy_threshold": 0, 00:24:44.463 "tls_version": 0, 00:24:44.463 "enable_ktls": false 00:24:44.463 } 00:24:44.463 }, 00:24:44.463 { 00:24:44.463 "method": "sock_impl_set_options", 00:24:44.463 "params": { 00:24:44.463 "impl_name": "posix", 00:24:44.463 "recv_buf_size": 2097152, 00:24:44.463 "send_buf_size": 2097152, 00:24:44.463 "enable_recv_pipe": true, 00:24:44.463 "enable_quickack": false, 00:24:44.463 "enable_placement_id": 0, 00:24:44.463 "enable_zerocopy_send_server": true, 00:24:44.463 "enable_zerocopy_send_client": false, 00:24:44.463 "zerocopy_threshold": 0, 00:24:44.463 "tls_version": 0, 00:24:44.463 "enable_ktls": false 00:24:44.463 } 00:24:44.463 } 00:24:44.463 ] 00:24:44.463 }, 00:24:44.463 { 00:24:44.463 "subsystem": "vmd", 00:24:44.463 "config": [] 00:24:44.463 }, 00:24:44.463 { 00:24:44.463 "subsystem": "accel", 00:24:44.463 "config": [ 00:24:44.463 { 00:24:44.463 "method": "accel_set_options", 00:24:44.463 "params": { 00:24:44.463 "small_cache_size": 128, 00:24:44.463 "large_cache_size": 16, 00:24:44.463 "task_count": 2048, 00:24:44.463 "sequence_count": 2048, 00:24:44.463 "buf_count": 2048 00:24:44.463 } 00:24:44.463 } 00:24:44.463 ] 00:24:44.463 }, 00:24:44.463 { 00:24:44.463 "subsystem": "bdev", 00:24:44.463 "config": [ 00:24:44.463 { 00:24:44.463 "method": "bdev_set_options", 00:24:44.463 "params": { 00:24:44.463 "bdev_io_pool_size": 65535, 00:24:44.463 "bdev_io_cache_size": 256, 00:24:44.463 "bdev_auto_examine": true, 00:24:44.463 "iobuf_small_cache_size": 128, 00:24:44.463 "iobuf_large_cache_size": 16 00:24:44.463 } 00:24:44.463 }, 00:24:44.463 { 00:24:44.463 "method": "bdev_raid_set_options", 00:24:44.463 "params": { 00:24:44.463 "process_window_size_kb": 1024 00:24:44.463 } 00:24:44.463 }, 00:24:44.463 { 00:24:44.463 "method": "bdev_iscsi_set_options", 00:24:44.463 "params": { 00:24:44.463 "timeout_sec": 30 00:24:44.464 } 00:24:44.464 }, 00:24:44.464 { 00:24:44.464 "method": "bdev_nvme_set_options", 00:24:44.464 "params": { 00:24:44.464 "action_on_timeout": "none", 00:24:44.464 "timeout_us": 0, 00:24:44.464 "timeout_admin_us": 0, 00:24:44.464 "keep_alive_timeout_ms": 10000, 00:24:44.464 "arbitration_burst": 0, 
00:24:44.464 "low_priority_weight": 0, 00:24:44.464 "medium_priority_weight": 0, 00:24:44.464 "high_priority_weight": 0, 00:24:44.464 "nvme_adminq_poll_period_us": 10000, 00:24:44.464 "nvme_ioq_poll_period_us": 0, 00:24:44.464 "io_queue_requests": 512, 00:24:44.464 "delay_cmd_submit": true, 00:24:44.464 "transport_retry_count": 4, 00:24:44.464 "bdev_retry_count": 3, 00:24:44.464 "transport_ack_timeout": 0, 00:24:44.464 "ctrlr_loss_timeout_sec": 0, 00:24:44.464 "reconnect_delay_sec": 0, 00:24:44.464 "fast_io_fail_timeout_sec": 0, 00:24:44.464 "disable_auto_failback": false, 00:24:44.464 "generate_uuids": false, 00:24:44.464 "transport_tos": 0, 00:24:44.464 "nvme_error_stat": false, 00:24:44.464 "rdma_srq_size": 0, 00:24:44.464 "io_path_stat": false, 00:24:44.464 "allow_accel_sequence": false, 00:24:44.464 "rdma_max_cq_size": 0, 00:24:44.464 "rdma_cm_event_timeout_ms": 0, 00:24:44.464 "dhchap_digests": [ 00:24:44.464 "sha256", 00:24:44.464 "sha384", 00:24:44.464 "sha512" 00:24:44.464 ], 00:24:44.464 "dhchap_dhgroups": [ 00:24:44.464 "null", 00:24:44.464 "ffdhe2048", 00:24:44.464 "ffdhe3072", 00:24:44.464 "ffdhe4096", 00:24:44.464 "ffdhe6144", 00:24:44.464 "ffdhe8192" 00:24:44.464 ] 00:24:44.464 } 00:24:44.464 }, 00:24:44.464 { 00:24:44.464 "method": "bdev_nvme_attach_controller", 00:24:44.464 "params": { 00:24:44.464 "name": "TLSTEST", 00:24:44.464 "trtype": "TCP", 00:24:44.464 "adrfam": "IPv4", 00:24:44.464 "traddr": "10.0.0.2", 00:24:44.464 "trsvcid": "4420", 00:24:44.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.464 "prchk_reftag": false, 00:24:44.464 "prchk_guard": false, 00:24:44.464 "ctrlr_loss_timeout_sec": 0, 00:24:44.464 "reconnect_delay_sec": 0, 00:24:44.464 "fast_io_fail_timeout_sec": 0, 00:24:44.464 "psk": "/tmp/tmp.uVXmLoEiac", 00:24:44.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:44.464 "hdgst": false, 00:24:44.464 "ddgst": false 00:24:44.464 } 00:24:44.464 }, 00:24:44.464 { 00:24:44.464 "method": "bdev_nvme_set_hotplug", 00:24:44.464 "params": { 00:24:44.464 "period_us": 100000, 00:24:44.464 "enable": false 00:24:44.464 } 00:24:44.464 }, 00:24:44.464 { 00:24:44.464 "method": "bdev_wait_for_examine" 00:24:44.464 } 00:24:44.464 ] 00:24:44.464 }, 00:24:44.464 { 00:24:44.464 "subsystem": "nbd", 00:24:44.464 "config": [] 00:24:44.464 } 00:24:44.464 ] 00:24:44.464 }' 00:24:44.464 13:36:19 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 331319 00:24:44.464 13:36:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 331319 ']' 00:24:44.464 13:36:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 331319 00:24:44.464 13:36:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:44.464 13:36:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.464 13:36:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 331319 00:24:44.464 13:36:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:44.464 13:36:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:44.464 13:36:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 331319' 00:24:44.464 killing process with pid 331319 00:24:44.464 13:36:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 331319 00:24:44.464 Received shutdown signal, test time was about 10.000000 seconds 00:24:44.464 00:24:44.464 Latency(us) 00:24:44.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:44.464 =================================================================================================================== 00:24:44.464 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:44.464 [2024-07-13 13:36:19.097278] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:44.464 13:36:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 331319 00:24:45.393 13:36:20 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 331021 00:24:45.393 13:36:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 331021 ']' 00:24:45.393 13:36:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 331021 00:24:45.393 13:36:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:45.393 13:36:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.393 13:36:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 331021 00:24:45.393 13:36:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:45.393 13:36:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:45.393 13:36:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 331021' 00:24:45.393 killing process with pid 331021 00:24:45.393 13:36:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 331021 00:24:45.393 [2024-07-13 13:36:20.069743] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:45.393 13:36:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 331021 00:24:46.764 13:36:21 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:46.764 13:36:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:46.764 13:36:21 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:46.764 "subsystems": [ 00:24:46.764 { 00:24:46.764 "subsystem": "keyring", 00:24:46.764 "config": [] 00:24:46.764 }, 00:24:46.764 { 00:24:46.764 "subsystem": "iobuf", 00:24:46.764 "config": [ 00:24:46.764 { 00:24:46.764 "method": "iobuf_set_options", 00:24:46.764 "params": { 00:24:46.764 "small_pool_count": 8192, 00:24:46.764 "large_pool_count": 1024, 00:24:46.764 "small_bufsize": 8192, 00:24:46.764 "large_bufsize": 135168 00:24:46.764 } 00:24:46.764 } 00:24:46.764 ] 00:24:46.764 }, 00:24:46.764 { 00:24:46.764 "subsystem": "sock", 00:24:46.764 "config": [ 00:24:46.764 { 00:24:46.764 "method": "sock_set_default_impl", 00:24:46.764 "params": { 00:24:46.764 "impl_name": "posix" 00:24:46.764 } 00:24:46.764 }, 00:24:46.764 { 00:24:46.764 "method": "sock_impl_set_options", 00:24:46.764 "params": { 00:24:46.764 "impl_name": "ssl", 00:24:46.764 "recv_buf_size": 4096, 00:24:46.764 "send_buf_size": 4096, 00:24:46.764 "enable_recv_pipe": true, 00:24:46.764 "enable_quickack": false, 00:24:46.765 "enable_placement_id": 0, 00:24:46.765 "enable_zerocopy_send_server": true, 00:24:46.765 "enable_zerocopy_send_client": false, 00:24:46.765 "zerocopy_threshold": 0, 00:24:46.765 "tls_version": 0, 00:24:46.765 "enable_ktls": false 00:24:46.765 } 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "sock_impl_set_options", 00:24:46.765 "params": { 00:24:46.765 "impl_name": "posix", 00:24:46.765 "recv_buf_size": 2097152, 00:24:46.765 "send_buf_size": 2097152, 00:24:46.765 "enable_recv_pipe": true, 00:24:46.765 
"enable_quickack": false, 00:24:46.765 "enable_placement_id": 0, 00:24:46.765 "enable_zerocopy_send_server": true, 00:24:46.765 "enable_zerocopy_send_client": false, 00:24:46.765 "zerocopy_threshold": 0, 00:24:46.765 "tls_version": 0, 00:24:46.765 "enable_ktls": false 00:24:46.765 } 00:24:46.765 } 00:24:46.765 ] 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "subsystem": "vmd", 00:24:46.765 "config": [] 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "subsystem": "accel", 00:24:46.765 "config": [ 00:24:46.765 { 00:24:46.765 "method": "accel_set_options", 00:24:46.765 "params": { 00:24:46.765 "small_cache_size": 128, 00:24:46.765 "large_cache_size": 16, 00:24:46.765 "task_count": 2048, 00:24:46.765 "sequence_count": 2048, 00:24:46.765 "buf_count": 2048 00:24:46.765 } 00:24:46.765 } 00:24:46.765 ] 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "subsystem": "bdev", 00:24:46.765 "config": [ 00:24:46.765 { 00:24:46.765 "method": "bdev_set_options", 00:24:46.765 "params": { 00:24:46.765 "bdev_io_pool_size": 65535, 00:24:46.765 "bdev_io_cache_size": 256, 00:24:46.765 "bdev_auto_examine": true, 00:24:46.765 "iobuf_small_cache_size": 128, 00:24:46.765 "iobuf_large_cache_size": 16 00:24:46.765 } 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "bdev_raid_set_options", 00:24:46.765 "params": { 00:24:46.765 "process_window_size_kb": 1024 00:24:46.765 } 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "bdev_iscsi_set_options", 00:24:46.765 "params": { 00:24:46.765 "timeout_sec": 30 00:24:46.765 } 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "bdev_nvme_set_options", 00:24:46.765 "params": { 00:24:46.765 "action_on_timeout": "none", 00:24:46.765 "timeout_us": 0, 00:24:46.765 "timeout_admin_us": 0, 00:24:46.765 "keep_alive_timeout_ms": 10000, 00:24:46.765 "arbitration_burst": 0, 00:24:46.765 "low_priority_weight": 0, 00:24:46.765 "medium_priority_weight": 0, 00:24:46.765 "high_priority_weight": 0, 00:24:46.765 "nvme_adminq_poll_period_us": 10000, 00:24:46.765 "nvme_ioq_poll_period_us": 0, 00:24:46.765 "io_queue_requests": 0, 00:24:46.765 "delay_cmd_submit": true, 00:24:46.765 "transport_retry_count": 4, 00:24:46.765 "bdev_retry_count": 3, 00:24:46.765 "transport_ack_timeout": 0, 00:24:46.765 "ctrlr_loss_timeout_sec": 0, 00:24:46.765 "reconnect_delay_sec": 0, 00:24:46.765 "fast_io_fail_timeout_sec": 0, 00:24:46.765 "disable_auto_failback": false, 00:24:46.765 "generate_uuids": false, 00:24:46.765 "transport_tos": 0, 00:24:46.765 "nvme_error_stat": false, 00:24:46.765 "rdma_srq_size": 0, 00:24:46.765 "io_path_stat": false, 00:24:46.765 "allow_accel_sequence": false, 00:24:46.765 "rdma_max_cq_size": 0, 00:24:46.765 "rdma_cm_event_timeout_ms": 0, 00:24:46.765 "dhchap_digests": [ 00:24:46.765 "sha256", 00:24:46.765 "sha384", 00:24:46.765 "sha512" 00:24:46.765 ], 00:24:46.765 "dhchap_dhgroups": [ 00:24:46.765 "null", 00:24:46.765 "ffdhe2048", 00:24:46.765 "ffdhe3072", 00:24:46.765 "ffdhe4096", 00:24:46.765 "ffdhe6144", 00:24:46.765 "ffdhe8192" 00:24:46.765 ] 00:24:46.765 } 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "bdev_nvme_set_hotplug", 00:24:46.765 "params": { 00:24:46.765 "period_us": 100000, 00:24:46.765 "enable": false 00:24:46.765 } 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "bdev_malloc_create", 00:24:46.765 "params": { 00:24:46.765 "name": "malloc0", 00:24:46.765 "num_blocks": 8192, 00:24:46.765 "block_size": 4096, 00:24:46.765 "physical_block_size": 4096, 00:24:46.765 "uuid": "273ded98-ed76-4839-96df-db87ef6374dc", 00:24:46.765 "optimal_io_boundary": 0 00:24:46.765 } 
00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "bdev_wait_for_examine" 00:24:46.765 } 00:24:46.765 ] 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "subsystem": "nbd", 00:24:46.765 "config": [] 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "subsystem": "scheduler", 00:24:46.765 "config": [ 00:24:46.765 { 00:24:46.765 "method": "framework_set_scheduler", 00:24:46.765 "params": { 00:24:46.765 "name": "static" 00:24:46.765 } 00:24:46.765 } 00:24:46.765 ] 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "subsystem": "nvmf", 00:24:46.765 "config": [ 00:24:46.765 { 00:24:46.765 "method": "nvmf_set_config", 00:24:46.765 "params": { 00:24:46.765 "discovery_filter": "match_any", 00:24:46.765 "admin_cmd_passthru": { 00:24:46.765 "identify_ctrlr": false 00:24:46.765 } 00:24:46.765 } 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "nvmf_set_max_subsystems", 00:24:46.765 "params": { 00:24:46.765 "max_subsystems": 1024 00:24:46.765 } 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "nvmf_set_crdt", 00:24:46.765 "params": { 00:24:46.765 "crdt1": 0, 00:24:46.765 "crdt2": 0, 00:24:46.765 "crdt3": 0 00:24:46.765 } 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "nvmf_create_transport", 00:24:46.765 "params": { 00:24:46.765 "trtype": "TCP", 00:24:46.765 "max_queue_depth": 128, 00:24:46.765 "max_io_qpairs_per_ctrlr": 127, 00:24:46.765 "in_capsule_data_size": 4096, 00:24:46.765 "max_io_size": 131072, 00:24:46.765 "io_unit_size": 131072, 00:24:46.765 "max_aq_depth": 128, 00:24:46.765 "num_shared_buffers": 511, 00:24:46.765 "buf_cache_size": 4294967295, 00:24:46.765 "dif_insert_or_strip": false, 00:24:46.765 "zcopy": false, 00:24:46.765 "c2h_success": false, 00:24:46.765 "sock_priority": 0, 00:24:46.765 "abort_timeout_sec": 1, 00:24:46.765 "ack_timeout": 0, 00:24:46.765 "data_wr_pool_size": 0 00:24:46.765 } 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "nvmf_create_subsystem", 00:24:46.765 "params": { 00:24:46.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.765 "allow_any_host": false, 00:24:46.765 "serial_number": "SPDK00000000000001", 00:24:46.765 "model_number": "SPDK bdev Controller", 00:24:46.765 "max_namespaces": 10, 00:24:46.765 "min_cntlid": 1, 00:24:46.765 "max_cntlid": 65519, 00:24:46.765 "ana_reporting": false 00:24:46.765 } 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "nvmf_subsystem_add_host", 00:24:46.765 "params": { 00:24:46.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.765 "host": "nqn.2016-06.io.spdk:host1", 00:24:46.765 "psk": "/tmp/tmp.uVXmLoEiac" 00:24:46.765 } 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "nvmf_subsystem_add_ns", 00:24:46.765 "params": { 00:24:46.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.765 "namespace": { 00:24:46.765 "nsid": 1, 00:24:46.765 "bdev_name": "malloc0", 00:24:46.765 "nguid": "273DED98ED76483996DFDB87EF6374DC", 00:24:46.765 "uuid": "273ded98-ed76-4839-96df-db87ef6374dc", 00:24:46.765 "no_auto_visible": false 00:24:46.765 } 00:24:46.765 } 00:24:46.765 }, 00:24:46.765 { 00:24:46.765 "method": "nvmf_subsystem_add_listener", 00:24:46.765 "params": { 00:24:46.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.765 "listen_address": { 00:24:46.765 "trtype": "TCP", 00:24:46.765 "adrfam": "IPv4", 00:24:46.765 "traddr": "10.0.0.2", 00:24:46.765 "trsvcid": "4420" 00:24:46.765 }, 00:24:46.765 "secure_channel": true 00:24:46.765 } 00:24:46.765 } 00:24:46.765 ] 00:24:46.765 } 00:24:46.765 ] 00:24:46.765 }' 00:24:46.765 13:36:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:46.765 13:36:21 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.766 13:36:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=331857 00:24:46.766 13:36:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:46.766 13:36:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 331857 00:24:46.766 13:36:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 331857 ']' 00:24:46.766 13:36:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.766 13:36:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.766 13:36:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.766 13:36:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.766 13:36:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.766 [2024-07-13 13:36:21.502211] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:46.766 [2024-07-13 13:36:21.502364] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.024 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.024 [2024-07-13 13:36:21.644753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.282 [2024-07-13 13:36:21.897630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.282 [2024-07-13 13:36:21.897710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.282 [2024-07-13 13:36:21.897741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.282 [2024-07-13 13:36:21.897767] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.282 [2024-07-13 13:36:21.897790] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
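The target with pid 331857 is not reconfigured by hand: the JSON captured earlier with save_config is replayed through the -c /dev/fd/62 argument seen in its command line, most likely via bash process substitution (-c <(echo "$tgtconf")). A stand-alone sketch of the same pattern using a temp file instead (file name illustrative):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$spdk/scripts/rpc.py" save_config > /tmp/tgtconf.json    # capture the live target's configuration
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 \
    -c /dev/fd/62 62< /tmp/tgtconf.json                   # hand the JSON back in on fd 62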
00:24:47.282 [2024-07-13 13:36:21.897942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.848 [2024-07-13 13:36:22.440550] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.848 [2024-07-13 13:36:22.456519] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:47.848 [2024-07-13 13:36:22.472548] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:47.848 [2024-07-13 13:36:22.472833] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.848 13:36:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.848 13:36:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:47.848 13:36:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:47.848 13:36:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:47.848 13:36:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.848 13:36:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.848 13:36:22 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=332012 00:24:47.848 13:36:22 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 332012 /var/tmp/bdevperf.sock 00:24:47.848 13:36:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 332012 ']' 00:24:47.848 13:36:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.848 13:36:22 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:47.848 13:36:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:47.848 13:36:22 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:47.848 "subsystems": [ 00:24:47.848 { 00:24:47.848 "subsystem": "keyring", 00:24:47.848 "config": [] 00:24:47.848 }, 00:24:47.848 { 00:24:47.848 "subsystem": "iobuf", 00:24:47.848 "config": [ 00:24:47.848 { 00:24:47.848 "method": "iobuf_set_options", 00:24:47.848 "params": { 00:24:47.848 "small_pool_count": 8192, 00:24:47.848 "large_pool_count": 1024, 00:24:47.848 "small_bufsize": 8192, 00:24:47.848 "large_bufsize": 135168 00:24:47.848 } 00:24:47.848 } 00:24:47.848 ] 00:24:47.848 }, 00:24:47.848 { 00:24:47.848 "subsystem": "sock", 00:24:47.848 "config": [ 00:24:47.848 { 00:24:47.848 "method": "sock_set_default_impl", 00:24:47.848 "params": { 00:24:47.848 "impl_name": "posix" 00:24:47.848 } 00:24:47.848 }, 00:24:47.848 { 00:24:47.848 "method": "sock_impl_set_options", 00:24:47.848 "params": { 00:24:47.848 "impl_name": "ssl", 00:24:47.848 "recv_buf_size": 4096, 00:24:47.848 "send_buf_size": 4096, 00:24:47.849 "enable_recv_pipe": true, 00:24:47.849 "enable_quickack": false, 00:24:47.849 "enable_placement_id": 0, 00:24:47.849 "enable_zerocopy_send_server": true, 00:24:47.849 "enable_zerocopy_send_client": false, 00:24:47.849 "zerocopy_threshold": 0, 00:24:47.849 "tls_version": 0, 00:24:47.849 "enable_ktls": false 00:24:47.849 } 00:24:47.849 }, 00:24:47.849 { 00:24:47.849 "method": "sock_impl_set_options", 00:24:47.849 "params": { 00:24:47.849 "impl_name": "posix", 00:24:47.849 "recv_buf_size": 2097152, 00:24:47.849 "send_buf_size": 2097152, 00:24:47.849 "enable_recv_pipe": true, 00:24:47.849 
"enable_quickack": false, 00:24:47.849 "enable_placement_id": 0, 00:24:47.849 "enable_zerocopy_send_server": true, 00:24:47.849 "enable_zerocopy_send_client": false, 00:24:47.849 "zerocopy_threshold": 0, 00:24:47.849 "tls_version": 0, 00:24:47.849 "enable_ktls": false 00:24:47.849 } 00:24:47.849 } 00:24:47.849 ] 00:24:47.849 }, 00:24:47.849 { 00:24:47.849 "subsystem": "vmd", 00:24:47.849 "config": [] 00:24:47.849 }, 00:24:47.849 { 00:24:47.849 "subsystem": "accel", 00:24:47.849 "config": [ 00:24:47.849 { 00:24:47.849 "method": "accel_set_options", 00:24:47.849 "params": { 00:24:47.849 "small_cache_size": 128, 00:24:47.849 "large_cache_size": 16, 00:24:47.849 "task_count": 2048, 00:24:47.849 "sequence_count": 2048, 00:24:47.849 "buf_count": 2048 00:24:47.849 } 00:24:47.849 } 00:24:47.849 ] 00:24:47.849 }, 00:24:47.849 { 00:24:47.849 "subsystem": "bdev", 00:24:47.849 "config": [ 00:24:47.849 { 00:24:47.849 "method": "bdev_set_options", 00:24:47.849 "params": { 00:24:47.849 "bdev_io_pool_size": 65535, 00:24:47.849 "bdev_io_cache_size": 256, 00:24:47.849 "bdev_auto_examine": true, 00:24:47.849 "iobuf_small_cache_size": 128, 00:24:47.849 "iobuf_large_cache_size": 16 00:24:47.849 } 00:24:47.849 }, 00:24:47.849 { 00:24:47.849 "method": "bdev_raid_set_options", 00:24:47.849 "params": { 00:24:47.849 "process_window_size_kb": 1024 00:24:47.849 } 00:24:47.849 }, 00:24:47.849 { 00:24:47.849 "method": "bdev_iscsi_set_options", 00:24:47.849 "params": { 00:24:47.849 "timeout_sec": 30 00:24:47.849 } 00:24:47.849 }, 00:24:47.849 { 00:24:47.849 "method": "bdev_nvme_set_options", 00:24:47.849 "params": { 00:24:47.849 "action_on_timeout": "none", 00:24:47.849 "timeout_us": 0, 00:24:47.849 "timeout_admin_us": 0, 00:24:47.849 "keep_alive_timeout_ms": 10000, 00:24:47.849 "arbitration_burst": 0, 00:24:47.849 "low_priority_weight": 0, 00:24:47.849 "medium_priority_weight": 0, 00:24:47.849 "high_priority_weight": 0, 00:24:47.849 "nvme_adminq_poll_period_us": 10000, 00:24:47.849 "nvme_ioq_poll_period_us": 0, 00:24:47.849 "io_queue_requests": 512, 00:24:47.849 "delay_cmd_submit": true, 00:24:47.849 "transport_retry_count": 4, 00:24:47.849 "bdev_retry_count": 3, 00:24:47.849 "transport_ack_timeout": 0, 00:24:47.849 "ctrlr_loss_timeout_sec": 0, 00:24:47.849 "reconnect_delay_sec": 0, 00:24:47.849 "fast_io_fail_timeout_sec": 0, 00:24:47.849 "disable_auto_failback": false, 00:24:47.849 "generate_uuids": false, 00:24:47.849 "transport_tos": 0, 00:24:47.849 "nvme_error_stat": false, 00:24:47.849 "rdma_srq_size": 0, 00:24:47.849 "io_path_stat": false, 00:24:47.849 "allow_accel_sequence": false, 00:24:47.849 "rdma_max_cq_size": 0, 00:24:47.849 "rdma_cm_event_timeout_ms": 0, 00:24:47.849 "dhchap_digests": [ 00:24:47.849 "sha256", 00:24:47.849 "sha384", 00:24:47.849 "sha512" 00:24:47.849 ], 00:24:47.849 "dhchap_dhgroups": [ 00:24:47.849 "null", 00:24:47.849 "ffdhe2048", 00:24:47.849 "ffdhe3072", 00:24:47.849 "ffdhe4096", 00:24:47.849 "ffdhe6144", 00:24:47.849 "ffdhe8192" 00:24:47.849 ] 00:24:47.849 } 00:24:47.849 }, 00:24:47.849 { 00:24:47.849 "method": "bdev_nvme_attach_controller", 00:24:47.849 "params": { 00:24:47.849 "name": "TLSTEST", 00:24:47.849 "trtype": "TCP", 00:24:47.849 "adrfam": "IPv4", 00:24:47.849 "traddr": "10.0.0.2", 00:24:47.849 "trsvcid": "4420", 00:24:47.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.849 "prchk_reftag": false, 00:24:47.849 "prchk_guard": false, 00:24:47.849 "ctrlr_loss_timeout_sec": 0, 00:24:47.849 "reconnect_delay_sec": 0, 00:24:47.849 "fast_io_fail_timeout_sec": 0, 00:24:47.849 
"psk": "/tmp/tmp.uVXmLoEiac", 00:24:47.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:47.849 "hdgst": false, 00:24:47.849 "ddgst": false 00:24:47.849 } 00:24:47.849 }, 00:24:47.849 { 00:24:47.849 "method": "bdev_nvme_set_hotplug", 00:24:47.849 "params": { 00:24:47.849 "period_us": 100000, 00:24:47.849 "enable": false 00:24:47.849 } 00:24:47.849 }, 00:24:47.849 { 00:24:47.849 "method": "bdev_wait_for_examine" 00:24:47.849 } 00:24:47.849 ] 00:24:47.849 }, 00:24:47.849 { 00:24:47.849 "subsystem": "nbd", 00:24:47.849 "config": [] 00:24:47.849 } 00:24:47.849 ] 00:24:47.849 }' 00:24:47.849 13:36:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.849 13:36:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:47.849 13:36:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.130 [2024-07-13 13:36:22.605444] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:48.130 [2024-07-13 13:36:22.605580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332012 ] 00:24:48.130 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.130 [2024-07-13 13:36:22.728788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.388 [2024-07-13 13:36:22.949034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.645 [2024-07-13 13:36:23.334126] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:48.645 [2024-07-13 13:36:23.334312] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:48.903 13:36:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.903 13:36:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:48.903 13:36:23 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:49.160 Running I/O for 10 seconds... 
00:24:59.124 00:24:59.124 Latency(us) 00:24:59.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.124 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:59.124 Verification LBA range: start 0x0 length 0x2000 00:24:59.124 TLSTESTn1 : 10.05 2603.62 10.17 0.00 0.00 49024.73 12427.57 80779.19 00:24:59.124 =================================================================================================================== 00:24:59.124 Total : 2603.62 10.17 0.00 0.00 49024.73 12427.57 80779.19 00:24:59.124 0 00:24:59.124 13:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:59.124 13:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 332012 00:24:59.124 13:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 332012 ']' 00:24:59.124 13:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 332012 00:24:59.124 13:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:59.124 13:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:59.124 13:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 332012 00:24:59.124 13:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:59.124 13:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:59.124 13:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 332012' 00:24:59.124 killing process with pid 332012 00:24:59.124 13:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 332012 00:24:59.124 Received shutdown signal, test time was about 10.000000 seconds 00:24:59.124 00:24:59.124 Latency(us) 00:24:59.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.124 =================================================================================================================== 00:24:59.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:59.124 [2024-07-13 13:36:33.767257] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:59.124 13:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 332012 00:25:00.057 13:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 331857 00:25:00.057 13:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 331857 ']' 00:25:00.057 13:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 331857 00:25:00.057 13:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:00.057 13:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.057 13:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 331857 00:25:00.057 13:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:00.057 13:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:00.057 13:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 331857' 00:25:00.057 killing process with pid 331857 00:25:00.057 13:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 331857 00:25:00.057 [2024-07-13 13:36:34.785305] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 
1 times 00:25:00.057 13:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 331857 00:25:01.956 13:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:25:01.956 13:36:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:01.956 13:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:01.956 13:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.956 13:36:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=333598 00:25:01.956 13:36:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:01.956 13:36:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 333598 00:25:01.956 13:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 333598 ']' 00:25:01.956 13:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.956 13:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.956 13:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.956 13:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.956 13:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.956 [2024-07-13 13:36:36.360188] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:01.956 [2024-07-13 13:36:36.360344] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.956 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.956 [2024-07-13 13:36:36.514657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.213 [2024-07-13 13:36:36.770545] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.213 [2024-07-13 13:36:36.770617] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.213 [2024-07-13 13:36:36.770646] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.213 [2024-07-13 13:36:36.770671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.213 [2024-07-13 13:36:36.770692] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:02.213 [2024-07-13 13:36:36.770741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.811 13:36:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.811 13:36:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:02.811 13:36:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:02.811 13:36:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:02.811 13:36:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:02.811 13:36:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.811 13:36:37 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.uVXmLoEiac 00:25:02.811 13:36:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uVXmLoEiac 00:25:02.811 13:36:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:03.068 [2024-07-13 13:36:37.640092] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.068 13:36:37 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:03.325 13:36:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:03.582 [2024-07-13 13:36:38.225793] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:03.582 [2024-07-13 13:36:38.226103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.582 13:36:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:03.839 malloc0 00:25:04.096 13:36:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:04.352 13:36:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uVXmLoEiac 00:25:04.352 [2024-07-13 13:36:39.093704] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:04.609 13:36:39 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=333903 00:25:04.609 13:36:39 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:04.609 13:36:39 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:04.609 13:36:39 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 333903 /var/tmp/bdevperf.sock 00:25:04.609 13:36:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 333903 ']' 00:25:04.609 13:36:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:04.609 13:36:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:04.609 13:36:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:04.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:04.609 13:36:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:04.609 13:36:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.609 [2024-07-13 13:36:39.189900] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:04.609 [2024-07-13 13:36:39.190057] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333903 ] 00:25:04.609 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.609 [2024-07-13 13:36:39.315549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.867 [2024-07-13 13:36:39.544929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.431 13:36:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:05.431 13:36:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:05.432 13:36:40 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uVXmLoEiac 00:25:05.688 13:36:40 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:05.945 [2024-07-13 13:36:40.630201] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:06.202 nvme0n1 00:25:06.202 13:36:40 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:06.202 Running I/O for 1 seconds... 
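For readers skimming the trace, the target-side setup that scrolled past above (target/tls.sh@219, setup_nvmf_tgt) reduces to a handful of rpc.py calls. The sketch below is condensed from the commands actually traced in this run; the PSK file /tmp/tmp.uVXmLoEiac, the NQNs and 10.0.0.2:4420 are values generated by this particular run, not fixed constants:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport, a subsystem with one malloc-backed namespace, and a TLS listener (-k)
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# gate the host on the pre-shared key; the trace warns that this --psk path form is deprecated
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uVXmLoEiac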
00:25:07.574 00:25:07.574 Latency(us) 00:25:07.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.574 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:07.574 Verification LBA range: start 0x0 length 0x2000 00:25:07.574 nvme0n1 : 1.04 2480.98 9.69 0.00 0.00 50542.84 9029.40 80779.19 00:25:07.574 =================================================================================================================== 00:25:07.574 Total : 2480.98 9.69 0.00 0.00 50542.84 9029.40 80779.19 00:25:07.574 0 00:25:07.574 13:36:41 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 333903 00:25:07.574 13:36:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 333903 ']' 00:25:07.574 13:36:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 333903 00:25:07.574 13:36:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:07.574 13:36:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:07.574 13:36:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 333903 00:25:07.574 13:36:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:07.574 13:36:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:07.574 13:36:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 333903' 00:25:07.574 killing process with pid 333903 00:25:07.574 13:36:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 333903 00:25:07.574 Received shutdown signal, test time was about 1.000000 seconds 00:25:07.574 00:25:07.574 Latency(us) 00:25:07.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.574 =================================================================================================================== 00:25:07.574 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.574 13:36:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 333903 00:25:08.508 13:36:43 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 333598 00:25:08.508 13:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 333598 ']' 00:25:08.508 13:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 333598 00:25:08.508 13:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:08.508 13:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:08.508 13:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 333598 00:25:08.508 13:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:08.508 13:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:08.508 13:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 333598' 00:25:08.508 killing process with pid 333598 00:25:08.508 13:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 333598 00:25:08.508 [2024-07-13 13:36:43.051583] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for 13:36:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 333598 00:25:08.508 removal in v24.09 hit 1 times 00:25:09.880 13:36:44 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:25:09.880 13:36:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:09.880 13:36:44 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:09.880 13:36:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.880 13:36:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=334563 00:25:09.880 13:36:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:09.880 13:36:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 334563 00:25:09.880 13:36:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 334563 ']' 00:25:09.880 13:36:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.880 13:36:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:09.880 13:36:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.880 13:36:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:09.880 13:36:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.880 [2024-07-13 13:36:44.573063] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:09.880 [2024-07-13 13:36:44.573204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.138 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.138 [2024-07-13 13:36:44.708941] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.397 [2024-07-13 13:36:44.963954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.397 [2024-07-13 13:36:44.964020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.397 [2024-07-13 13:36:44.964060] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.397 [2024-07-13 13:36:44.964082] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.397 [2024-07-13 13:36:44.964100] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
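The killprocess calls traced a little earlier (pids 333903 and 333598) follow a fixed pattern from autotest_common.sh: make sure the pid is still alive, check what it is, announce the kill, then reap it. This is a simplified reconstruction of that pattern as it appears in the trace, not the verbatim helper (the real one treats sudo-wrapped processes specially):

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                      # bail out if the process already exited
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 / reactor_1 in this run
        echo "killing process with pid $pid ($name)"
    fi
    kill "$pid"
    wait "$pid" || true                             # reap it so later steps see a clean process table
}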
00:25:10.397 [2024-07-13 13:36:44.964164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.963 13:36:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:10.963 13:36:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:10.963 13:36:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:10.963 13:36:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:10.963 13:36:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.963 13:36:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.963 13:36:45 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:25:10.963 13:36:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.963 13:36:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.963 [2024-07-13 13:36:45.555919] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.963 malloc0 00:25:10.963 [2024-07-13 13:36:45.637764] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:10.963 [2024-07-13 13:36:45.638101] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.963 13:36:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.963 13:36:45 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=334717 00:25:10.963 13:36:45 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:10.963 13:36:45 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 334717 /var/tmp/bdevperf.sock 00:25:10.964 13:36:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 334717 ']' 00:25:10.964 13:36:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:10.964 13:36:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.964 13:36:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:10.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:10.964 13:36:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.964 13:36:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.222 [2024-07-13 13:36:45.744252] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
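Note that bdevperf is launched with -z here (pid 334717), so it comes up idle and only listens on /var/tmp/bdevperf.sock; the workload parameters (-q 128 -o 4k -w verify -t 1) are armed, but nothing runs until the harness configures the TLS connection over that socket and fires perform_tests. Condensed from the commands traced just below:

bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$bperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
# once the socket is up: register the PSK as key0, attach the controller over TLS, then start I/O
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uVXmLoEiac
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests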
00:25:11.222 [2024-07-13 13:36:45.744398] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334717 ] 00:25:11.222 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.222 [2024-07-13 13:36:45.873286] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.479 [2024-07-13 13:36:46.127613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.045 13:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:12.045 13:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:12.045 13:36:46 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uVXmLoEiac 00:25:12.302 13:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:12.560 [2024-07-13 13:36:47.232179] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:12.817 nvme0n1 00:25:12.817 13:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:12.817 Running I/O for 1 seconds... 00:25:14.190 00:25:14.190 Latency(us) 00:25:14.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.190 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:14.190 Verification LBA range: start 0x0 length 0x2000 00:25:14.190 nvme0n1 : 1.03 1060.56 4.14 0.00 0.00 118322.54 10243.03 108741.21 00:25:14.190 =================================================================================================================== 00:25:14.190 Total : 1060.56 4.14 0.00 0.00 118322.54 10243.03 108741.21 00:25:14.190 0 00:25:14.190 13:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:25:14.190 13:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.190 13:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.190 13:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.190 13:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:25:14.190 "subsystems": [ 00:25:14.190 { 00:25:14.190 "subsystem": "keyring", 00:25:14.190 "config": [ 00:25:14.190 { 00:25:14.190 "method": "keyring_file_add_key", 00:25:14.190 "params": { 00:25:14.190 "name": "key0", 00:25:14.190 "path": "/tmp/tmp.uVXmLoEiac" 00:25:14.190 } 00:25:14.190 } 00:25:14.190 ] 00:25:14.190 }, 00:25:14.190 { 00:25:14.190 "subsystem": "iobuf", 00:25:14.190 "config": [ 00:25:14.190 { 00:25:14.190 "method": "iobuf_set_options", 00:25:14.190 "params": { 00:25:14.190 "small_pool_count": 8192, 00:25:14.190 "large_pool_count": 1024, 00:25:14.190 "small_bufsize": 8192, 00:25:14.190 "large_bufsize": 135168 00:25:14.191 } 00:25:14.191 } 00:25:14.191 ] 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "subsystem": "sock", 00:25:14.191 "config": [ 00:25:14.191 { 00:25:14.191 "method": "sock_set_default_impl", 00:25:14.191 "params": { 00:25:14.191 "impl_name": "posix" 00:25:14.191 } 
00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "sock_impl_set_options", 00:25:14.191 "params": { 00:25:14.191 "impl_name": "ssl", 00:25:14.191 "recv_buf_size": 4096, 00:25:14.191 "send_buf_size": 4096, 00:25:14.191 "enable_recv_pipe": true, 00:25:14.191 "enable_quickack": false, 00:25:14.191 "enable_placement_id": 0, 00:25:14.191 "enable_zerocopy_send_server": true, 00:25:14.191 "enable_zerocopy_send_client": false, 00:25:14.191 "zerocopy_threshold": 0, 00:25:14.191 "tls_version": 0, 00:25:14.191 "enable_ktls": false 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "sock_impl_set_options", 00:25:14.191 "params": { 00:25:14.191 "impl_name": "posix", 00:25:14.191 "recv_buf_size": 2097152, 00:25:14.191 "send_buf_size": 2097152, 00:25:14.191 "enable_recv_pipe": true, 00:25:14.191 "enable_quickack": false, 00:25:14.191 "enable_placement_id": 0, 00:25:14.191 "enable_zerocopy_send_server": true, 00:25:14.191 "enable_zerocopy_send_client": false, 00:25:14.191 "zerocopy_threshold": 0, 00:25:14.191 "tls_version": 0, 00:25:14.191 "enable_ktls": false 00:25:14.191 } 00:25:14.191 } 00:25:14.191 ] 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "subsystem": "vmd", 00:25:14.191 "config": [] 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "subsystem": "accel", 00:25:14.191 "config": [ 00:25:14.191 { 00:25:14.191 "method": "accel_set_options", 00:25:14.191 "params": { 00:25:14.191 "small_cache_size": 128, 00:25:14.191 "large_cache_size": 16, 00:25:14.191 "task_count": 2048, 00:25:14.191 "sequence_count": 2048, 00:25:14.191 "buf_count": 2048 00:25:14.191 } 00:25:14.191 } 00:25:14.191 ] 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "subsystem": "bdev", 00:25:14.191 "config": [ 00:25:14.191 { 00:25:14.191 "method": "bdev_set_options", 00:25:14.191 "params": { 00:25:14.191 "bdev_io_pool_size": 65535, 00:25:14.191 "bdev_io_cache_size": 256, 00:25:14.191 "bdev_auto_examine": true, 00:25:14.191 "iobuf_small_cache_size": 128, 00:25:14.191 "iobuf_large_cache_size": 16 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "bdev_raid_set_options", 00:25:14.191 "params": { 00:25:14.191 "process_window_size_kb": 1024 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "bdev_iscsi_set_options", 00:25:14.191 "params": { 00:25:14.191 "timeout_sec": 30 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "bdev_nvme_set_options", 00:25:14.191 "params": { 00:25:14.191 "action_on_timeout": "none", 00:25:14.191 "timeout_us": 0, 00:25:14.191 "timeout_admin_us": 0, 00:25:14.191 "keep_alive_timeout_ms": 10000, 00:25:14.191 "arbitration_burst": 0, 00:25:14.191 "low_priority_weight": 0, 00:25:14.191 "medium_priority_weight": 0, 00:25:14.191 "high_priority_weight": 0, 00:25:14.191 "nvme_adminq_poll_period_us": 10000, 00:25:14.191 "nvme_ioq_poll_period_us": 0, 00:25:14.191 "io_queue_requests": 0, 00:25:14.191 "delay_cmd_submit": true, 00:25:14.191 "transport_retry_count": 4, 00:25:14.191 "bdev_retry_count": 3, 00:25:14.191 "transport_ack_timeout": 0, 00:25:14.191 "ctrlr_loss_timeout_sec": 0, 00:25:14.191 "reconnect_delay_sec": 0, 00:25:14.191 "fast_io_fail_timeout_sec": 0, 00:25:14.191 "disable_auto_failback": false, 00:25:14.191 "generate_uuids": false, 00:25:14.191 "transport_tos": 0, 00:25:14.191 "nvme_error_stat": false, 00:25:14.191 "rdma_srq_size": 0, 00:25:14.191 "io_path_stat": false, 00:25:14.191 "allow_accel_sequence": false, 00:25:14.191 "rdma_max_cq_size": 0, 00:25:14.191 "rdma_cm_event_timeout_ms": 0, 00:25:14.191 "dhchap_digests": [ 00:25:14.191 "sha256", 
00:25:14.191 "sha384", 00:25:14.191 "sha512" 00:25:14.191 ], 00:25:14.191 "dhchap_dhgroups": [ 00:25:14.191 "null", 00:25:14.191 "ffdhe2048", 00:25:14.191 "ffdhe3072", 00:25:14.191 "ffdhe4096", 00:25:14.191 "ffdhe6144", 00:25:14.191 "ffdhe8192" 00:25:14.191 ] 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "bdev_nvme_set_hotplug", 00:25:14.191 "params": { 00:25:14.191 "period_us": 100000, 00:25:14.191 "enable": false 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "bdev_malloc_create", 00:25:14.191 "params": { 00:25:14.191 "name": "malloc0", 00:25:14.191 "num_blocks": 8192, 00:25:14.191 "block_size": 4096, 00:25:14.191 "physical_block_size": 4096, 00:25:14.191 "uuid": "8bca3363-6e24-4bf7-b46b-abf840635a8e", 00:25:14.191 "optimal_io_boundary": 0 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "bdev_wait_for_examine" 00:25:14.191 } 00:25:14.191 ] 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "subsystem": "nbd", 00:25:14.191 "config": [] 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "subsystem": "scheduler", 00:25:14.191 "config": [ 00:25:14.191 { 00:25:14.191 "method": "framework_set_scheduler", 00:25:14.191 "params": { 00:25:14.191 "name": "static" 00:25:14.191 } 00:25:14.191 } 00:25:14.191 ] 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "subsystem": "nvmf", 00:25:14.191 "config": [ 00:25:14.191 { 00:25:14.191 "method": "nvmf_set_config", 00:25:14.191 "params": { 00:25:14.191 "discovery_filter": "match_any", 00:25:14.191 "admin_cmd_passthru": { 00:25:14.191 "identify_ctrlr": false 00:25:14.191 } 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "nvmf_set_max_subsystems", 00:25:14.191 "params": { 00:25:14.191 "max_subsystems": 1024 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "nvmf_set_crdt", 00:25:14.191 "params": { 00:25:14.191 "crdt1": 0, 00:25:14.191 "crdt2": 0, 00:25:14.191 "crdt3": 0 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "nvmf_create_transport", 00:25:14.191 "params": { 00:25:14.191 "trtype": "TCP", 00:25:14.191 "max_queue_depth": 128, 00:25:14.191 "max_io_qpairs_per_ctrlr": 127, 00:25:14.191 "in_capsule_data_size": 4096, 00:25:14.191 "max_io_size": 131072, 00:25:14.191 "io_unit_size": 131072, 00:25:14.191 "max_aq_depth": 128, 00:25:14.191 "num_shared_buffers": 511, 00:25:14.191 "buf_cache_size": 4294967295, 00:25:14.191 "dif_insert_or_strip": false, 00:25:14.191 "zcopy": false, 00:25:14.191 "c2h_success": false, 00:25:14.191 "sock_priority": 0, 00:25:14.191 "abort_timeout_sec": 1, 00:25:14.191 "ack_timeout": 0, 00:25:14.191 "data_wr_pool_size": 0 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "nvmf_create_subsystem", 00:25:14.191 "params": { 00:25:14.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.191 "allow_any_host": false, 00:25:14.191 "serial_number": "00000000000000000000", 00:25:14.191 "model_number": "SPDK bdev Controller", 00:25:14.191 "max_namespaces": 32, 00:25:14.191 "min_cntlid": 1, 00:25:14.191 "max_cntlid": 65519, 00:25:14.191 "ana_reporting": false 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "nvmf_subsystem_add_host", 00:25:14.191 "params": { 00:25:14.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.191 "host": "nqn.2016-06.io.spdk:host1", 00:25:14.191 "psk": "key0" 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "nvmf_subsystem_add_ns", 00:25:14.191 "params": { 00:25:14.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.191 "namespace": { 00:25:14.191 "nsid": 1, 
00:25:14.191 "bdev_name": "malloc0", 00:25:14.191 "nguid": "8BCA33636E244BF7B46BABF840635A8E", 00:25:14.191 "uuid": "8bca3363-6e24-4bf7-b46b-abf840635a8e", 00:25:14.191 "no_auto_visible": false 00:25:14.191 } 00:25:14.191 } 00:25:14.191 }, 00:25:14.191 { 00:25:14.191 "method": "nvmf_subsystem_add_listener", 00:25:14.191 "params": { 00:25:14.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.191 "listen_address": { 00:25:14.191 "trtype": "TCP", 00:25:14.191 "adrfam": "IPv4", 00:25:14.191 "traddr": "10.0.0.2", 00:25:14.191 "trsvcid": "4420" 00:25:14.191 }, 00:25:14.191 "secure_channel": true 00:25:14.191 } 00:25:14.191 } 00:25:14.191 ] 00:25:14.191 } 00:25:14.191 ] 00:25:14.191 }' 00:25:14.191 13:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:14.450 13:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:25:14.450 "subsystems": [ 00:25:14.450 { 00:25:14.450 "subsystem": "keyring", 00:25:14.450 "config": [ 00:25:14.450 { 00:25:14.450 "method": "keyring_file_add_key", 00:25:14.450 "params": { 00:25:14.450 "name": "key0", 00:25:14.450 "path": "/tmp/tmp.uVXmLoEiac" 00:25:14.450 } 00:25:14.450 } 00:25:14.450 ] 00:25:14.450 }, 00:25:14.450 { 00:25:14.450 "subsystem": "iobuf", 00:25:14.450 "config": [ 00:25:14.450 { 00:25:14.450 "method": "iobuf_set_options", 00:25:14.450 "params": { 00:25:14.450 "small_pool_count": 8192, 00:25:14.450 "large_pool_count": 1024, 00:25:14.450 "small_bufsize": 8192, 00:25:14.450 "large_bufsize": 135168 00:25:14.450 } 00:25:14.450 } 00:25:14.450 ] 00:25:14.450 }, 00:25:14.450 { 00:25:14.450 "subsystem": "sock", 00:25:14.450 "config": [ 00:25:14.450 { 00:25:14.450 "method": "sock_set_default_impl", 00:25:14.450 "params": { 00:25:14.450 "impl_name": "posix" 00:25:14.450 } 00:25:14.450 }, 00:25:14.450 { 00:25:14.450 "method": "sock_impl_set_options", 00:25:14.450 "params": { 00:25:14.450 "impl_name": "ssl", 00:25:14.450 "recv_buf_size": 4096, 00:25:14.450 "send_buf_size": 4096, 00:25:14.450 "enable_recv_pipe": true, 00:25:14.450 "enable_quickack": false, 00:25:14.450 "enable_placement_id": 0, 00:25:14.450 "enable_zerocopy_send_server": true, 00:25:14.450 "enable_zerocopy_send_client": false, 00:25:14.450 "zerocopy_threshold": 0, 00:25:14.450 "tls_version": 0, 00:25:14.450 "enable_ktls": false 00:25:14.450 } 00:25:14.450 }, 00:25:14.450 { 00:25:14.450 "method": "sock_impl_set_options", 00:25:14.450 "params": { 00:25:14.450 "impl_name": "posix", 00:25:14.450 "recv_buf_size": 2097152, 00:25:14.450 "send_buf_size": 2097152, 00:25:14.450 "enable_recv_pipe": true, 00:25:14.450 "enable_quickack": false, 00:25:14.450 "enable_placement_id": 0, 00:25:14.450 "enable_zerocopy_send_server": true, 00:25:14.450 "enable_zerocopy_send_client": false, 00:25:14.450 "zerocopy_threshold": 0, 00:25:14.450 "tls_version": 0, 00:25:14.450 "enable_ktls": false 00:25:14.450 } 00:25:14.450 } 00:25:14.450 ] 00:25:14.450 }, 00:25:14.450 { 00:25:14.450 "subsystem": "vmd", 00:25:14.450 "config": [] 00:25:14.450 }, 00:25:14.450 { 00:25:14.450 "subsystem": "accel", 00:25:14.450 "config": [ 00:25:14.450 { 00:25:14.450 "method": "accel_set_options", 00:25:14.450 "params": { 00:25:14.450 "small_cache_size": 128, 00:25:14.450 "large_cache_size": 16, 00:25:14.450 "task_count": 2048, 00:25:14.450 "sequence_count": 2048, 00:25:14.450 "buf_count": 2048 00:25:14.450 } 00:25:14.450 } 00:25:14.450 ] 00:25:14.450 }, 00:25:14.450 { 00:25:14.450 "subsystem": "bdev", 00:25:14.450 "config": [ 
00:25:14.450 { 00:25:14.450 "method": "bdev_set_options", 00:25:14.450 "params": { 00:25:14.450 "bdev_io_pool_size": 65535, 00:25:14.450 "bdev_io_cache_size": 256, 00:25:14.450 "bdev_auto_examine": true, 00:25:14.450 "iobuf_small_cache_size": 128, 00:25:14.450 "iobuf_large_cache_size": 16 00:25:14.450 } 00:25:14.450 }, 00:25:14.450 { 00:25:14.450 "method": "bdev_raid_set_options", 00:25:14.450 "params": { 00:25:14.450 "process_window_size_kb": 1024 00:25:14.450 } 00:25:14.450 }, 00:25:14.450 { 00:25:14.450 "method": "bdev_iscsi_set_options", 00:25:14.450 "params": { 00:25:14.450 "timeout_sec": 30 00:25:14.450 } 00:25:14.450 }, 00:25:14.450 { 00:25:14.450 "method": "bdev_nvme_set_options", 00:25:14.450 "params": { 00:25:14.450 "action_on_timeout": "none", 00:25:14.450 "timeout_us": 0, 00:25:14.450 "timeout_admin_us": 0, 00:25:14.450 "keep_alive_timeout_ms": 10000, 00:25:14.450 "arbitration_burst": 0, 00:25:14.450 "low_priority_weight": 0, 00:25:14.450 "medium_priority_weight": 0, 00:25:14.450 "high_priority_weight": 0, 00:25:14.450 "nvme_adminq_poll_period_us": 10000, 00:25:14.450 "nvme_ioq_poll_period_us": 0, 00:25:14.450 "io_queue_requests": 512, 00:25:14.450 "delay_cmd_submit": true, 00:25:14.450 "transport_retry_count": 4, 00:25:14.450 "bdev_retry_count": 3, 00:25:14.450 "transport_ack_timeout": 0, 00:25:14.450 "ctrlr_loss_timeout_sec": 0, 00:25:14.450 "reconnect_delay_sec": 0, 00:25:14.450 "fast_io_fail_timeout_sec": 0, 00:25:14.450 "disable_auto_failback": false, 00:25:14.450 "generate_uuids": false, 00:25:14.450 "transport_tos": 0, 00:25:14.450 "nvme_error_stat": false, 00:25:14.450 "rdma_srq_size": 0, 00:25:14.450 "io_path_stat": false, 00:25:14.450 "allow_accel_sequence": false, 00:25:14.450 "rdma_max_cq_size": 0, 00:25:14.450 "rdma_cm_event_timeout_ms": 0, 00:25:14.450 "dhchap_digests": [ 00:25:14.450 "sha256", 00:25:14.450 "sha384", 00:25:14.450 "sha512" 00:25:14.450 ], 00:25:14.450 "dhchap_dhgroups": [ 00:25:14.450 "null", 00:25:14.450 "ffdhe2048", 00:25:14.450 "ffdhe3072", 00:25:14.450 "ffdhe4096", 00:25:14.450 "ffdhe6144", 00:25:14.450 "ffdhe8192" 00:25:14.450 ] 00:25:14.450 } 00:25:14.450 }, 00:25:14.450 { 00:25:14.450 "method": "bdev_nvme_attach_controller", 00:25:14.450 "params": { 00:25:14.450 "name": "nvme0", 00:25:14.450 "trtype": "TCP", 00:25:14.450 "adrfam": "IPv4", 00:25:14.450 "traddr": "10.0.0.2", 00:25:14.450 "trsvcid": "4420", 00:25:14.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.450 "prchk_reftag": false, 00:25:14.450 "prchk_guard": false, 00:25:14.450 "ctrlr_loss_timeout_sec": 0, 00:25:14.450 "reconnect_delay_sec": 0, 00:25:14.450 "fast_io_fail_timeout_sec": 0, 00:25:14.450 "psk": "key0", 00:25:14.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:14.450 "hdgst": false, 00:25:14.450 "ddgst": false 00:25:14.451 } 00:25:14.451 }, 00:25:14.451 { 00:25:14.451 "method": "bdev_nvme_set_hotplug", 00:25:14.451 "params": { 00:25:14.451 "period_us": 100000, 00:25:14.451 "enable": false 00:25:14.451 } 00:25:14.451 }, 00:25:14.451 { 00:25:14.451 "method": "bdev_enable_histogram", 00:25:14.451 "params": { 00:25:14.451 "name": "nvme0n1", 00:25:14.451 "enable": true 00:25:14.451 } 00:25:14.451 }, 00:25:14.451 { 00:25:14.451 "method": "bdev_wait_for_examine" 00:25:14.451 } 00:25:14.451 ] 00:25:14.451 }, 00:25:14.451 { 00:25:14.451 "subsystem": "nbd", 00:25:14.451 "config": [] 00:25:14.451 } 00:25:14.451 ] 00:25:14.451 }' 00:25:14.451 13:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 334717 00:25:14.451 13:36:48 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 334717 ']' 00:25:14.451 13:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 334717 00:25:14.451 13:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:14.451 13:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:14.451 13:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 334717 00:25:14.451 13:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:14.451 13:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:14.451 13:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 334717' 00:25:14.451 killing process with pid 334717 00:25:14.451 13:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 334717 00:25:14.451 Received shutdown signal, test time was about 1.000000 seconds 00:25:14.451 00:25:14.451 Latency(us) 00:25:14.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.451 =================================================================================================================== 00:25:14.451 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:14.451 13:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 334717 00:25:15.384 13:36:50 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 334563 00:25:15.384 13:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 334563 ']' 00:25:15.384 13:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 334563 00:25:15.384 13:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:15.384 13:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:15.384 13:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 334563 00:25:15.384 13:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:15.384 13:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:15.384 13:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 334563' 00:25:15.384 killing process with pid 334563 00:25:15.384 13:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 334563 00:25:15.384 13:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 334563 00:25:16.758 13:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:25:16.758 13:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:16.758 13:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:25:16.758 "subsystems": [ 00:25:16.758 { 00:25:16.758 "subsystem": "keyring", 00:25:16.758 "config": [ 00:25:16.758 { 00:25:16.758 "method": "keyring_file_add_key", 00:25:16.758 "params": { 00:25:16.758 "name": "key0", 00:25:16.758 "path": "/tmp/tmp.uVXmLoEiac" 00:25:16.758 } 00:25:16.758 } 00:25:16.758 ] 00:25:16.758 }, 00:25:16.758 { 00:25:16.758 "subsystem": "iobuf", 00:25:16.758 "config": [ 00:25:16.758 { 00:25:16.758 "method": "iobuf_set_options", 00:25:16.758 "params": { 00:25:16.758 "small_pool_count": 8192, 00:25:16.758 "large_pool_count": 1024, 00:25:16.758 "small_bufsize": 8192, 00:25:16.758 "large_bufsize": 135168 00:25:16.758 } 00:25:16.758 } 00:25:16.758 ] 00:25:16.758 }, 00:25:16.758 { 00:25:16.758 "subsystem": "sock", 00:25:16.758 "config": [ 00:25:16.758 { 00:25:16.758 "method": 
"sock_set_default_impl", 00:25:16.758 "params": { 00:25:16.758 "impl_name": "posix" 00:25:16.758 } 00:25:16.758 }, 00:25:16.758 { 00:25:16.758 "method": "sock_impl_set_options", 00:25:16.758 "params": { 00:25:16.758 "impl_name": "ssl", 00:25:16.758 "recv_buf_size": 4096, 00:25:16.758 "send_buf_size": 4096, 00:25:16.758 "enable_recv_pipe": true, 00:25:16.758 "enable_quickack": false, 00:25:16.758 "enable_placement_id": 0, 00:25:16.758 "enable_zerocopy_send_server": true, 00:25:16.758 "enable_zerocopy_send_client": false, 00:25:16.758 "zerocopy_threshold": 0, 00:25:16.758 "tls_version": 0, 00:25:16.758 "enable_ktls": false 00:25:16.758 } 00:25:16.758 }, 00:25:16.758 { 00:25:16.758 "method": "sock_impl_set_options", 00:25:16.758 "params": { 00:25:16.758 "impl_name": "posix", 00:25:16.758 "recv_buf_size": 2097152, 00:25:16.758 "send_buf_size": 2097152, 00:25:16.758 "enable_recv_pipe": true, 00:25:16.758 "enable_quickack": false, 00:25:16.758 "enable_placement_id": 0, 00:25:16.758 "enable_zerocopy_send_server": true, 00:25:16.758 "enable_zerocopy_send_client": false, 00:25:16.758 "zerocopy_threshold": 0, 00:25:16.758 "tls_version": 0, 00:25:16.758 "enable_ktls": false 00:25:16.758 } 00:25:16.758 } 00:25:16.758 ] 00:25:16.758 }, 00:25:16.758 { 00:25:16.758 "subsystem": "vmd", 00:25:16.758 "config": [] 00:25:16.758 }, 00:25:16.758 { 00:25:16.758 "subsystem": "accel", 00:25:16.758 "config": [ 00:25:16.758 { 00:25:16.758 "method": "accel_set_options", 00:25:16.758 "params": { 00:25:16.758 "small_cache_size": 128, 00:25:16.758 "large_cache_size": 16, 00:25:16.758 "task_count": 2048, 00:25:16.758 "sequence_count": 2048, 00:25:16.758 "buf_count": 2048 00:25:16.758 } 00:25:16.758 } 00:25:16.758 ] 00:25:16.758 }, 00:25:16.758 { 00:25:16.758 "subsystem": "bdev", 00:25:16.758 "config": [ 00:25:16.758 { 00:25:16.758 "method": "bdev_set_options", 00:25:16.758 "params": { 00:25:16.758 "bdev_io_pool_size": 65535, 00:25:16.758 "bdev_io_cache_size": 256, 00:25:16.758 "bdev_auto_examine": true, 00:25:16.758 "iobuf_small_cache_size": 128, 00:25:16.758 "iobuf_large_cache_size": 16 00:25:16.758 } 00:25:16.758 }, 00:25:16.758 { 00:25:16.758 "method": "bdev_raid_set_options", 00:25:16.758 "params": { 00:25:16.758 "process_window_size_kb": 1024 00:25:16.758 } 00:25:16.758 }, 00:25:16.758 { 00:25:16.758 "method": "bdev_iscsi_set_options", 00:25:16.758 "params": { 00:25:16.758 "timeout_sec": 30 00:25:16.758 } 00:25:16.758 }, 00:25:16.758 { 00:25:16.758 "method": "bdev_nvme_set_options", 00:25:16.758 "params": { 00:25:16.758 "action_on_timeout": "none", 00:25:16.758 "timeout_us": 0, 00:25:16.758 "timeout_admin_us": 0, 00:25:16.758 "keep_alive_timeout_ms": 10000, 00:25:16.758 "arbitration_burst": 0, 00:25:16.758 "low_priority_weight": 0, 00:25:16.758 "medium_priority_weight": 0, 00:25:16.758 "high_priority_weight": 0, 00:25:16.758 "nvme_adminq_poll_period_us": 10000, 00:25:16.758 "nvme_ioq_poll_period_us": 0, 00:25:16.758 "io_queue_requests": 0, 00:25:16.758 "delay_cmd_submit": true, 00:25:16.758 "transport_retry_count": 4, 00:25:16.758 "bdev_retry_count": 3, 00:25:16.758 "transport_ack_timeout": 0, 00:25:16.758 "ctrlr_loss_timeout_sec": 0, 00:25:16.758 "reconnect_delay_sec": 0, 00:25:16.758 "fast_io_fail_timeout_sec": 0, 00:25:16.758 "disable_auto_failback": false, 00:25:16.758 "generate_uuids": false, 00:25:16.758 "transport_tos": 0, 00:25:16.758 "nvme_error_stat": false, 00:25:16.758 "rdma_srq_size": 0, 00:25:16.758 "io_path_stat": false, 00:25:16.758 "allow_accel_sequence": false, 00:25:16.758 "rdma_max_cq_size": 0, 
00:25:16.758 "rdma_cm_event_timeout_ms": 0, 00:25:16.758 "dhchap_digests": [ 00:25:16.758 "sha256", 00:25:16.758 "sha384", 00:25:16.758 "sha512" 00:25:16.758 ], 00:25:16.758 "dhchap_dhgroups": [ 00:25:16.758 "null", 00:25:16.758 "ffdhe2048", 00:25:16.758 "ffdhe3072", 00:25:16.758 "ffdhe4096", 00:25:16.758 "ffdhe6144", 00:25:16.758 "ffdhe8192" 00:25:16.758 ] 00:25:16.758 } 00:25:16.758 }, 00:25:16.758 { 00:25:16.758 "method": "bdev_nvme_set_hotplug", 00:25:16.758 "params": { 00:25:16.758 "period_us": 100000, 00:25:16.758 "enable": false 00:25:16.758 } 00:25:16.758 }, 00:25:16.758 { 00:25:16.758 "method": "bdev_malloc_create", 00:25:16.758 "params": { 00:25:16.758 "name": "malloc0", 00:25:16.758 "num_blocks": 8192, 00:25:16.758 "block_size": 4096, 00:25:16.758 "physical_block_size": 4096, 00:25:16.758 "uuid": "8bca3363-6e24-4bf7-b46b-abf840635a8e", 00:25:16.759 "optimal_io_boundary": 0 00:25:16.759 } 00:25:16.759 }, 00:25:16.759 { 00:25:16.759 "method": "bdev_wait_for_examine" 00:25:16.759 } 00:25:16.759 ] 00:25:16.759 }, 00:25:16.759 { 00:25:16.759 "subsystem": "nbd", 00:25:16.759 "config": [] 00:25:16.759 }, 00:25:16.759 { 00:25:16.759 "subsystem": "scheduler", 00:25:16.759 "config": [ 00:25:16.759 { 00:25:16.759 "method": "framework_set_scheduler", 00:25:16.759 "params": { 00:25:16.759 "name": "static" 00:25:16.759 } 00:25:16.759 } 00:25:16.759 ] 00:25:16.759 }, 00:25:16.759 { 00:25:16.759 "subsystem": "nvmf", 00:25:16.759 "config": [ 00:25:16.759 { 00:25:16.759 "method": "nvmf_set_config", 00:25:16.759 "params": { 00:25:16.759 "discovery_filter": "match_any", 00:25:16.759 "admin_cmd_passthru": { 00:25:16.759 "identify_ctrlr": false 00:25:16.759 } 00:25:16.759 } 00:25:16.759 }, 00:25:16.759 { 00:25:16.759 "method": "nvmf_set_max_subsystems", 00:25:16.759 "params": { 00:25:16.759 "max_subsystems": 1024 00:25:16.759 } 00:25:16.759 }, 00:25:16.759 { 00:25:16.759 "method": "nvmf_set_crdt", 00:25:16.759 "params": { 00:25:16.759 "crdt1": 0, 00:25:16.759 "crdt2": 0, 00:25:16.759 "crdt3": 0 00:25:16.759 } 00:25:16.759 }, 00:25:16.759 { 00:25:16.759 "method": "nvmf_create_transport", 00:25:16.759 "params": { 00:25:16.759 "trtype": "TCP", 00:25:16.759 "max_queue_depth": 128, 00:25:16.759 "max_io_qpairs_per_ctrlr": 127, 00:25:16.759 "in_capsule_data_size": 4096, 00:25:16.759 "max_io_size": 131072, 00:25:16.759 "io_unit_size": 131072, 00:25:16.759 "max_aq_depth": 128, 00:25:16.759 "num_shared_buffers": 511, 00:25:16.759 "buf_cache_size": 4294967295, 00:25:16.759 "dif_insert_or_strip": false, 00:25:16.759 "zcopy": false, 00:25:16.759 "c2h_success": false, 00:25:16.759 "sock_priority": 0, 00:25:16.759 "abort_timeout_sec": 1, 00:25:16.759 "ack_timeout": 0, 00:25:16.759 "data_wr_pool_size": 0 00:25:16.759 } 00:25:16.759 }, 00:25:16.759 { 00:25:16.759 "method": "nvmf_create_subsystem", 00:25:16.759 "params": { 00:25:16.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.759 "allow_any_host": false, 00:25:16.759 "serial_number": "00000000000000000000", 00:25:16.759 "model_number": "SPDK bdev Controller", 00:25:16.759 "max_namespaces": 32, 00:25:16.759 "min_cntlid": 1, 00:25:16.759 "max_cntlid": 65519, 00:25:16.759 "ana_reporting": false 00:25:16.759 } 00:25:16.759 }, 00:25:16.759 { 00:25:16.759 "method": "nvmf_subsystem_add_host", 00:25:16.759 "params": { 00:25:16.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.759 "host": "nqn.2016-06.io.spdk:host1", 00:25:16.759 "psk": "key0" 00:25:16.759 } 00:25:16.759 }, 00:25:16.759 { 00:25:16.759 "method": "nvmf_subsystem_add_ns", 00:25:16.759 "params": { 
00:25:16.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.759 "namespace": { 00:25:16.759 "nsid": 1, 00:25:16.759 "bdev_name": "malloc0", 00:25:16.759 "nguid": "8BCA33636E244BF7B46BABF840635A8E", 00:25:16.759 "uuid": "8bca3363-6e24-4bf7-b46b-abf840635a8e", 00:25:16.759 "no_auto_visible": false 00:25:16.759 } 00:25:16.759 } 00:25:16.759 }, 00:25:16.759 { 00:25:16.759 "method": "nvmf_subsystem_add_listener", 00:25:16.759 "params": { 00:25:16.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.759 "listen_address": { 00:25:16.759 "trtype": "TCP", 00:25:16.759 "adrfam": "IPv4", 00:25:16.759 "traddr": "10.0.0.2", 00:25:16.759 "trsvcid": "4420" 00:25:16.759 }, 00:25:16.759 "secure_channel": true 00:25:16.759 } 00:25:16.759 } 00:25:16.759 ] 00:25:16.759 } 00:25:16.759 ] 00:25:16.759 }' 00:25:16.759 13:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:16.759 13:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.759 13:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=335393 00:25:16.759 13:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:16.759 13:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 335393 00:25:16.759 13:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 335393 ']' 00:25:16.759 13:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.759 13:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.759 13:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.759 13:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.759 13:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:17.037 [2024-07-13 13:36:51.560923] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:17.037 [2024-07-13 13:36:51.561064] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.037 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.037 [2024-07-13 13:36:51.690423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.308 [2024-07-13 13:36:51.951639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.308 [2024-07-13 13:36:51.951729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.308 [2024-07-13 13:36:51.951761] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.308 [2024-07-13 13:36:51.951786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.308 [2024-07-13 13:36:51.951808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:17.308 [2024-07-13 13:36:51.951970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.875 [2024-07-13 13:36:52.500873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.875 [2024-07-13 13:36:52.532851] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:17.875 [2024-07-13 13:36:52.533146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.875 13:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:17.875 13:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:17.875 13:36:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:17.875 13:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:17.875 13:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:17.875 13:36:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.875 13:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=335549 00:25:17.875 13:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 335549 /var/tmp/bdevperf.sock 00:25:17.875 13:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 335549 ']' 00:25:17.875 13:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:17.875 13:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:17.875 13:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:17.875 13:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:25:17.875 "subsystems": [ 00:25:17.875 { 00:25:17.875 "subsystem": "keyring", 00:25:17.875 "config": [ 00:25:17.875 { 00:25:17.875 "method": "keyring_file_add_key", 00:25:17.875 "params": { 00:25:17.875 "name": "key0", 00:25:17.875 "path": "/tmp/tmp.uVXmLoEiac" 00:25:17.875 } 00:25:17.875 } 00:25:17.875 ] 00:25:17.875 }, 00:25:17.875 { 00:25:17.875 "subsystem": "iobuf", 00:25:17.875 "config": [ 00:25:17.875 { 00:25:17.875 "method": "iobuf_set_options", 00:25:17.875 "params": { 00:25:17.875 "small_pool_count": 8192, 00:25:17.875 "large_pool_count": 1024, 00:25:17.875 "small_bufsize": 8192, 00:25:17.875 "large_bufsize": 135168 00:25:17.875 } 00:25:17.875 } 00:25:17.875 ] 00:25:17.875 }, 00:25:17.875 { 00:25:17.875 "subsystem": "sock", 00:25:17.875 "config": [ 00:25:17.875 { 00:25:17.875 "method": "sock_set_default_impl", 00:25:17.875 "params": { 00:25:17.875 "impl_name": "posix" 00:25:17.875 } 00:25:17.875 }, 00:25:17.875 { 00:25:17.875 "method": "sock_impl_set_options", 00:25:17.875 "params": { 00:25:17.875 "impl_name": "ssl", 00:25:17.875 "recv_buf_size": 4096, 00:25:17.875 "send_buf_size": 4096, 00:25:17.875 "enable_recv_pipe": true, 00:25:17.875 "enable_quickack": false, 00:25:17.875 "enable_placement_id": 0, 00:25:17.875 "enable_zerocopy_send_server": true, 00:25:17.875 "enable_zerocopy_send_client": false, 00:25:17.875 "zerocopy_threshold": 0, 00:25:17.875 "tls_version": 0, 00:25:17.875 "enable_ktls": false 00:25:17.875 } 00:25:17.875 }, 00:25:17.875 { 00:25:17.875 "method": "sock_impl_set_options", 00:25:17.875 "params": { 00:25:17.875 "impl_name": "posix", 00:25:17.875 "recv_buf_size": 2097152, 00:25:17.875 "send_buf_size": 2097152, 00:25:17.875 
"enable_recv_pipe": true, 00:25:17.875 "enable_quickack": false, 00:25:17.875 "enable_placement_id": 0, 00:25:17.875 "enable_zerocopy_send_server": true, 00:25:17.875 "enable_zerocopy_send_client": false, 00:25:17.875 "zerocopy_threshold": 0, 00:25:17.875 "tls_version": 0, 00:25:17.875 "enable_ktls": false 00:25:17.875 } 00:25:17.875 } 00:25:17.875 ] 00:25:17.875 }, 00:25:17.875 { 00:25:17.875 "subsystem": "vmd", 00:25:17.875 "config": [] 00:25:17.875 }, 00:25:17.875 { 00:25:17.875 "subsystem": "accel", 00:25:17.875 "config": [ 00:25:17.875 { 00:25:17.875 "method": "accel_set_options", 00:25:17.875 "params": { 00:25:17.875 "small_cache_size": 128, 00:25:17.875 "large_cache_size": 16, 00:25:17.875 "task_count": 2048, 00:25:17.875 "sequence_count": 2048, 00:25:17.875 "buf_count": 2048 00:25:17.875 } 00:25:17.875 } 00:25:17.875 ] 00:25:17.875 }, 00:25:17.875 { 00:25:17.875 "subsystem": "bdev", 00:25:17.875 "config": [ 00:25:17.875 { 00:25:17.875 "method": "bdev_set_options", 00:25:17.875 "params": { 00:25:17.875 "bdev_io_pool_size": 65535, 00:25:17.875 "bdev_io_cache_size": 256, 00:25:17.875 "bdev_auto_examine": true, 00:25:17.875 "iobuf_small_cache_size": 128, 00:25:17.875 "iobuf_large_cache_size": 16 00:25:17.875 } 00:25:17.875 }, 00:25:17.875 { 00:25:17.875 "method": "bdev_raid_set_options", 00:25:17.875 "params": { 00:25:17.875 "process_window_size_kb": 1024 00:25:17.875 } 00:25:17.875 }, 00:25:17.875 { 00:25:17.875 "method": "bdev_iscsi_set_options", 00:25:17.875 "params": { 00:25:17.875 "timeout_sec": 30 00:25:17.875 } 00:25:17.875 }, 00:25:17.875 { 00:25:17.875 "method": "bdev_nvme_set_options", 00:25:17.875 "params": { 00:25:17.875 "action_on_timeout": "none", 00:25:17.876 "timeout_us": 0, 00:25:17.876 "timeout_admin_us": 0, 00:25:17.876 "keep_alive_timeout_ms": 10000, 00:25:17.876 "arbitration_burst": 0, 00:25:17.876 "low_priority_weight": 0, 00:25:17.876 "medium_priority_weight": 0, 00:25:17.876 "high_priority_weight": 0, 00:25:17.876 "nvme_adminq_poll_period_us": 10000, 00:25:17.876 "nvme_ioq_poll_period_us": 0, 00:25:17.876 "io_queue_requests": 512, 00:25:17.876 "delay_cmd_submit": true, 00:25:17.876 "transport_retry_count": 4, 00:25:17.876 "bdev_retry_count": 3, 00:25:17.876 "transport_ack_timeout": 0, 00:25:17.876 "ctrlr_loss_timeout_sec": 0, 00:25:17.876 "reconnect_delay_sec": 0, 00:25:17.876 "fast_io_fail_timeout_sec": 0, 00:25:17.876 "disable_auto_failback": false, 00:25:17.876 "generate_uuids": false, 00:25:17.876 "transport_tos": 0, 00:25:17.876 "nvme_error_stat": false, 00:25:17.876 "rdma_srq_size": 0, 00:25:17.876 "io_path_stat": false, 00:25:17.876 "allow_accel_sequence": false, 00:25:17.876 "rdma_max_cq_size": 0, 00:25:17.876 "rdma_cm_event_timeout_ms": 0, 00:25:17.876 "dhchap_digests": [ 00:25:17.876 "sha256", 00:25:17.876 "sha384", 00:25:17.876 "sh 13:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:17.876 a512" 00:25:17.876 ], 00:25:17.876 "dhchap_dhgroups": [ 00:25:17.876 "null", 00:25:17.876 "ffdhe2048", 00:25:17.876 "ffdhe3072", 00:25:17.876 "ffdhe4096", 00:25:17.876 "ffdhe6144", 00:25:17.876 "ffdhe8192" 00:25:17.876 ] 00:25:17.876 } 00:25:17.876 }, 00:25:17.876 { 00:25:17.876 "method": "bdev_nvme_attach_controller", 00:25:17.876 "params": { 00:25:17.876 "name": "nvme0", 00:25:17.876 "trtype": "TCP", 00:25:17.876 "adrfam": "IPv4", 00:25:17.876 "traddr": "10.0.0.2", 00:25:17.876 "trsvcid": "4420", 00:25:17.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.876 "prchk_reftag": false, 00:25:17.876 "prchk_guard": false, 00:25:17.876 "ctrlr_loss_timeout_sec": 0, 00:25:17.876 "reconnect_delay_sec": 0, 00:25:17.876 "fast_io_fail_timeout_sec": 0, 00:25:17.876 "psk": "key0", 00:25:17.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:17.876 "hdgst": false, 00:25:17.876 "ddgst": false 00:25:17.876 } 00:25:17.876 }, 00:25:17.876 { 00:25:17.876 "method": "bdev_nvme_set_hotplug", 00:25:17.876 "params": { 00:25:17.876 "period_us": 100000, 00:25:17.876 "enable": false 00:25:17.876 } 00:25:17.876 }, 00:25:17.876 { 00:25:17.876 "method": "bdev_enable_histogram", 00:25:17.876 "params": { 00:25:17.876 "name": "nvme0n1", 00:25:17.876 "enable": true 00:25:17.876 } 00:25:17.876 }, 00:25:17.876 { 00:25:17.876 "method": "bdev_wait_for_examine" 00:25:17.876 } 00:25:17.876 ] 00:25:17.876 }, 00:25:17.876 { 00:25:17.876 "subsystem": "nbd", 00:25:17.876 "config": [] 00:25:17.876 } 00:25:17.876 ] 00:25:17.876 }' 00:25:17.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:17.876 13:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:17.876 13:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.133 [2024-07-13 13:36:52.669031] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:18.133 [2024-07-13 13:36:52.669175] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335549 ] 00:25:18.133 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.133 [2024-07-13 13:36:52.793601] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.391 [2024-07-13 13:36:53.023894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.956 [2024-07-13 13:36:53.425000] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:18.956 13:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:18.956 13:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:18.956 13:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:18.956 13:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:25:19.214 13:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.214 13:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:19.471 Running I/O for 1 seconds... 
00:25:20.404 00:25:20.404 Latency(us) 00:25:20.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.404 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:20.404 Verification LBA range: start 0x0 length 0x2000 00:25:20.404 nvme0n1 : 1.05 2438.96 9.53 0.00 0.00 51418.28 11990.66 85827.89 00:25:20.404 =================================================================================================================== 00:25:20.404 Total : 2438.96 9.53 0.00 0.00 51418.28 11990.66 85827.89 00:25:20.404 0 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:20.404 nvmf_trace.0 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 335549 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 335549 ']' 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 335549 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 335549 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 335549' 00:25:20.404 killing process with pid 335549 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 335549 00:25:20.404 Received shutdown signal, test time was about 1.000000 seconds 00:25:20.404 00:25:20.404 Latency(us) 00:25:20.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.404 =================================================================================================================== 00:25:20.404 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.404 13:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 335549 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:25:21.779 
13:36:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:21.779 rmmod nvme_tcp 00:25:21.779 rmmod nvme_fabrics 00:25:21.779 rmmod nvme_keyring 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 335393 ']' 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 335393 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 335393 ']' 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 335393 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 335393 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 335393' 00:25:21.779 killing process with pid 335393 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 335393 00:25:21.779 13:36:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 335393 00:25:23.154 13:36:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:23.154 13:36:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:23.154 13:36:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:23.154 13:36:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:23.154 13:36:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:23.154 13:36:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.154 13:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.154 13:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.060 13:36:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:25.060 13:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.QpnaoGhCcJ /tmp/tmp.wM9TMovaaH /tmp/tmp.uVXmLoEiac 00:25:25.060 00:25:25.060 real 1m50.395s 00:25:25.060 user 3m0.016s 00:25:25.060 sys 0m26.230s 00:25:25.060 13:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:25.060 13:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:25.060 ************************************ 00:25:25.060 END TEST nvmf_tls 00:25:25.060 ************************************ 00:25:25.060 13:36:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:25.060 13:36:59 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:25.060 13:36:59 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:25.060 13:36:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:25.060 13:36:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:25.060 ************************************ 00:25:25.060 START TEST nvmf_fips 00:25:25.060 ************************************ 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:25.060 * Looking for test storage... 00:25:25.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.060 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:25:25.061 
13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:25:25.061 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:25:25.321 Error setting digest 00:25:25.321 00B2BD61DB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:25.321 00B2BD61DB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:25:25.321 13:36:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:27.222 
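The block above is gather_supported_nvmf_pci_devs classifying NICs purely by PCI vendor:device ID (Intel 0x1592/0x159b for E810, 0x37d2 for X722, a list of Mellanox IDs for mlx) and then keeping only the family requested by SPDK_TEST_NVMF_NICS, e810 in this run. The sketch below shows the same idea with a direct lspci scan; the real common.sh reads a prebuilt pci_bus_cache map instead, so treat the lookup mechanics here as an assumption for illustration.

#!/usr/bin/env bash
# Sketch of the NIC classification step traced above: keep PCI functions whose
# vendor:device ID matches the family requested by SPDK_TEST_NVMF_NICS.
# Uses lspci directly; the real common.sh consults a prebuilt pci_bus_cache map.

declare -A nic_family=(
    [8086:1592]=e810 [8086:159b]=e810      # Intel E810 (ice)
    [8086:37d2]=x722                        # Intel X722 (i40e)
    [15b3:1017]=mlx  [15b3:1019]=mlx        # Mellanox ConnectX-5 family (subset)
)

want=${SPDK_TEST_NVMF_NICS:-e810}
pci_devs=()

while read -r bdf _ id _; do
    fam=${nic_family[$id]:-}
    [[ $fam == "$want" ]] && pci_devs+=("$bdf")
done < <(lspci -Dn -d ::0200)               # all Ethernet-class functions

echo "Found ${#pci_devs[@]} $want port(s): ${pci_devs[*]}"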
13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:27.222 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:27.222 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.222 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:27.222 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:27.223 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.223 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.480 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:27.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:25:27.480 00:25:27.480 --- 10.0.0.2 ping statistics --- 00:25:27.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.480 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:25:27.480 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:27.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:25:27.480 00:25:27.480 --- 10.0.0.1 ping statistics --- 00:25:27.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.480 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:25:27.480 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.480 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:25:27.480 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:27.480 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.480 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:27.480 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:27.480 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.480 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:27.480 13:37:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:27.480 13:37:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:27.480 13:37:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:27.480 13:37:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:27.480 13:37:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:27.480 13:37:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=338043 00:25:27.480 13:37:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:27.480 13:37:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 338043 00:25:27.480 13:37:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 338043 ']' 00:25:27.480 13:37:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.480 13:37:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:27.480 13:37:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.480 13:37:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.480 13:37:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:27.480 [2024-07-13 13:37:02.137295] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:27.480 [2024-07-13 13:37:02.137428] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.480 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.737 [2024-07-13 13:37:02.271631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.994 [2024-07-13 13:37:02.525606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.994 [2024-07-13 13:37:02.525702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:27.994 [2024-07-13 13:37:02.525730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.994 [2024-07-13 13:37:02.525751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.994 [2024-07-13 13:37:02.525772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:27.994 [2024-07-13 13:37:02.525824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:28.559 13:37:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:28.559 [2024-07-13 13:37:03.266143] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.559 [2024-07-13 13:37:03.282095] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:28.559 [2024-07-13 13:37:03.282400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.816 [2024-07-13 13:37:03.357402] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:28.816 malloc0 00:25:28.816 13:37:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:28.816 13:37:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=338322 00:25:28.816 13:37:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:28.816 13:37:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 338322 /var/tmp/bdevperf.sock 00:25:28.816 13:37:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 338322 ']' 00:25:28.816 13:37:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:28.816 13:37:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:25:28.816 13:37:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:28.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:28.816 13:37:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:28.816 13:37:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:28.816 [2024-07-13 13:37:03.495550] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:28.816 [2024-07-13 13:37:03.495688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid338322 ] 00:25:29.073 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.073 [2024-07-13 13:37:03.615631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.331 [2024-07-13 13:37:03.839124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.896 13:37:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:29.896 13:37:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:29.896 13:37:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:29.896 [2024-07-13 13:37:04.623872] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:29.896 [2024-07-13 13:37:04.624084] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:30.154 TLSTESTn1 00:25:30.154 13:37:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:30.154 Running I/O for 10 seconds... 
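At this point fips.sh has written the TLS PSK to key.txt with owner-only permissions, started bdevperf in RPC-wait mode on its own socket and core mask, attached a TLS-protected NVMe/TCP controller with --psk, and kicked off the 10-second verify workload via bdevperf.py. The initiator-side sequence is condensed below from the trace; the target-side provisioning done by setup_nvmf_tgt_conf is not fully visible in this excerpt and is omitted.

#!/usr/bin/env bash
# Initiator-side sequence from fips.sh, condensed from the trace above.
# Paths are this job's workspace paths; adjust for a local checkout.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bdevperf.sock
key=$spdk/test/nvmf/fips/key.txt

# 1. PSK on disk, readable only by the owner.
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key"
chmod 0600 "$key"

# 2. bdevperf in RPC-wait mode (-z) on its own socket, core mask 0x4.
"$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
# (the real script waits for the RPC socket via waitforlisten before continuing)

# 3. Attach a TLS-protected NVMe/TCP controller using the PSK.
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk "$key"

# 4. Run the configured 10-second verify workload.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests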
00:25:42.377 00:25:42.377 Latency(us) 00:25:42.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.377 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:42.377 Verification LBA range: start 0x0 length 0x2000 00:25:42.377 TLSTESTn1 : 10.05 2561.39 10.01 0.00 0.00 49830.60 9951.76 66021.45 00:25:42.377 =================================================================================================================== 00:25:42.377 Total : 2561.39 10.01 0.00 0.00 49830.60 9951.76 66021.45 00:25:42.377 0 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:42.377 nvmf_trace.0 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 338322 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 338322 ']' 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 338322 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:42.377 13:37:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:42.377 13:37:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 338322 00:25:42.377 13:37:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:42.377 13:37:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:42.377 13:37:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 338322' 00:25:42.377 killing process with pid 338322 00:25:42.377 13:37:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 338322 00:25:42.377 Received shutdown signal, test time was about 10.000000 seconds 00:25:42.377 00:25:42.377 Latency(us) 00:25:42.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.377 =================================================================================================================== 00:25:42.377 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.377 [2024-07-13 13:37:15.023084] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:42.377 13:37:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 338322 00:25:42.377 13:37:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:42.377 13:37:15 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:25:42.377 13:37:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:42.377 13:37:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.377 13:37:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:42.377 13:37:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.377 13:37:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.377 rmmod nvme_tcp 00:25:42.377 rmmod nvme_fabrics 00:25:42.377 rmmod nvme_keyring 00:25:42.377 13:37:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.377 13:37:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:42.377 13:37:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:42.377 13:37:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 338043 ']' 00:25:42.377 13:37:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 338043 00:25:42.377 13:37:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 338043 ']' 00:25:42.377 13:37:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 338043 00:25:42.377 13:37:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:42.377 13:37:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:42.377 13:37:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 338043 00:25:42.377 13:37:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:42.377 13:37:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:42.377 13:37:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 338043' 00:25:42.378 killing process with pid 338043 00:25:42.378 13:37:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 338043 00:25:42.378 [2024-07-13 13:37:16.066858] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:42.378 13:37:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 338043 00:25:42.634 13:37:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:42.634 13:37:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:42.634 13:37:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:42.634 13:37:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.634 13:37:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:42.634 13:37:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.893 13:37:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.893 13:37:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:44.794 00:25:44.794 real 0m19.734s 00:25:44.794 user 0m26.511s 00:25:44.794 sys 0m5.386s 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:44.794 ************************************ 00:25:44.794 END TEST nvmf_fips 00:25:44.794 
************************************ 00:25:44.794 13:37:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:44.794 13:37:19 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:25:44.794 13:37:19 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:44.794 13:37:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:44.794 13:37:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.794 13:37:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:44.794 ************************************ 00:25:44.794 START TEST nvmf_fuzz 00:25:44.794 ************************************ 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:44.794 * Looking for test storage... 00:25:44.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:44.794 13:37:19 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.794 13:37:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.053 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:45.053 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:45.053 13:37:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:25:45.053 13:37:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:46.955 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.955 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:46.956 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:46.956 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:46.956 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:46.956 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:47.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:25:47.214 00:25:47.214 --- 10.0.0.2 ping statistics --- 00:25:47.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.214 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:47.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:47.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:25:47.214 00:25:47.214 --- 10.0.0.1 ping statistics --- 00:25:47.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.214 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=341832 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 341832 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 341832 ']' 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.214 13:37:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
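The setup traced above reduces to a small, self-contained topology: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace and acts as the target side, the other port (cvl_0_1) stays in the root namespace as the initiator side, and the SPDK target is launched inside the namespace. A condensed sketch of the equivalent commands, assuming the interface and namespace names seen in this run:

  # cvl_0_0 (target, 10.0.0.2) lives in netns cvl_0_0_ns_spdk; cvl_0_1 (initiator, 10.0.0.1) stays in the root namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # accept NVMe/TCP (port 4420) on the initiator-side interface
  ping -c 1 10.0.0.2                                              # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator sanity check
  modprobe nvme-tcp
  # The target itself then runs inside the namespace (core mask 0x1, trace group mask 0xFFFF):
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1

Because the two ports sit on the same segment (both cross-namespace pings above succeed), traffic sent from the root namespace to 10.0.0.2 leaves through cvl_0_1 and arrives on cvl_0_0 inside the namespace, so a single machine exercises the full NVMe/TCP path over real hardware.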
00:25:47.215 13:37:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.215 13:37:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:48.149 Malloc0 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:48.149 13:37:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:20.207 Fuzzing completed. 
Shutting down the fuzz application
00:26:20.207
00:26:20.207 Dumping successful admin opcodes:
00:26:20.207 8, 9, 10, 24,
00:26:20.207 Dumping successful io opcodes:
00:26:20.207 0, 9,
00:26:20.207 NS: 0x200003aefec0 I/O qp, Total commands completed: 324506, total successful commands: 1914, random_seed: 4232324288
00:26:20.207 NS: 0x200003aefec0 admin qp, Total commands completed: 40880, total successful commands: 333, random_seed: 1276520704
00:26:20.207 13:37:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:26:21.141 Fuzzing completed. Shutting down the fuzz application
00:26:21.141
00:26:21.141 Dumping successful admin opcodes:
00:26:21.141 24,
00:26:21.141 Dumping successful io opcodes:
00:26:21.141
00:26:21.141 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 234757357
00:26:21.141 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 234975778
00:26:21.141 13:37:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:21.141 13:37:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:21.141 13:37:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:21.400 rmmod nvme_tcp
00:26:21.400 rmmod nvme_fabrics
00:26:21.400 rmmod nvme_keyring
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 341832 ']'
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 341832
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 341832 ']'
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 341832
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 341832
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:21.400
13:37:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 341832' 00:26:21.400 killing process with pid 341832 00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 341832 00:26:21.400 13:37:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 341832 00:26:22.808 13:37:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:22.808 13:37:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:22.808 13:37:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:22.808 13:37:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:22.808 13:37:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:22.808 13:37:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.808 13:37:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:22.808 13:37:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.341 13:37:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:25.341 13:37:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:25.341 00:26:25.341 real 0m40.106s 00:26:25.341 user 0m57.347s 00:26:25.341 sys 0m13.813s 00:26:25.341 13:37:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:25.341 13:37:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:25.341 ************************************ 00:26:25.341 END TEST nvmf_fuzz 00:26:25.341 ************************************ 00:26:25.341 13:37:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:25.341 13:37:59 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:25.341 13:37:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:25.341 13:37:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.341 13:37:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:25.341 ************************************ 00:26:25.341 START TEST nvmf_multiconnection 00:26:25.341 ************************************ 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:25.341 * Looking for test storage... 
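For reference, the two nvme_fuzz passes whose statistics appear above come down to the following invocations (long workspace paths are shortened to $rootdir here for readability; that shorthand is not part of the trace):

  # Pass 1: 30 seconds of randomly generated commands against the subsystem, with a fixed
  # seed (-S 123456) so a failing run can be reproduced; core mask 0x2 keeps the fuzzer off
  # the core the target is pinned to (-m 0x1 above).
  $rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
  # Pass 2: drive the same subsystem from the example.json command description instead of
  # purely random generation; note the much smaller command counts in its summary above.
  $rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' \
      -j $rootdir/test/app/fuzz/nvme_fuzz/example.json -a

Both passes shut down on their own and dump the opcodes that completed successfully, which is what the two result summaries above show.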
00:26:25.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:26:25.341 13:37:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.243 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.243 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:26:27.243 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:27.243 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:27.243 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:27.243 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:27.243 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:27.243 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:26:27.243 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.244 13:38:01 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:27.244 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:27.244 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:27.244 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:27.244 13:38:01 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:27.244 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:27.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:26:27.244 00:26:27.244 --- 10.0.0.2 ping statistics --- 00:26:27.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.244 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:27.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:26:27.244 00:26:27.244 --- 10.0.0.1 ping statistics --- 00:26:27.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.244 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=347819 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 347819 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 347819 ']' 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
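The device discovery in the trace above follows a simple pattern: common.sh keeps per-family lists of PCI IDs (e810, x722, mlx), selects the family this job was configured for (e810, device ID 0x8086:0x159b, bound to the ice driver), and then resolves each PCI function to its kernel netdev by globbing sysfs. A rough sketch of that resolution step, using the two functions found on this node:

  pci_devs=(0000:0a:00.0 0000:0a:00.1)     # the two E810 ports detected for this job
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      # A port with a bound kernel driver shows up as /sys/bus/pci/devices/<bdf>/net/<ifname>
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")          # strip the sysfs prefix, keep the interface name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done
  # On this node the result is cvl_0_0 and cvl_0_1, which nvmf_tcp_init then splits into the
  # target (cvl_0_0) and initiator (cvl_0_1) interfaces, exactly as in the fuzz test earlier.

The helper also checks that each candidate interface reports up before keeping it ([[ up == up ]] in the trace), so only usable ports end up in TCP_INTERFACE_LIST.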
00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:27.244 13:38:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.244 [2024-07-13 13:38:01.918725] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:27.244 [2024-07-13 13:38:01.918889] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.503 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.503 [2024-07-13 13:38:02.056002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:27.761 [2024-07-13 13:38:02.315450] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.761 [2024-07-13 13:38:02.315523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.761 [2024-07-13 13:38:02.315551] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.761 [2024-07-13 13:38:02.315573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.761 [2024-07-13 13:38:02.315598] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.761 [2024-07-13 13:38:02.315744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.761 [2024-07-13 13:38:02.315816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.761 [2024-07-13 13:38:02.315918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.761 [2024-07-13 13:38:02.315926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.326 [2024-07-13 13:38:02.861361] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.326 
13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.326 Malloc1 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.326 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.326 [2024-07-13 13:38:02.968377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.327 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.327 13:38:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.327 13:38:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:28.327 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.327 13:38:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.327 Malloc2 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.327 13:38:03 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.327 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.586 Malloc3 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.586 Malloc4 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.586 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.845 Malloc5 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.845 Malloc6 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.845 13:38:03 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.845 Malloc7 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.845 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 Malloc8 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 Malloc9 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.104 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 Malloc10 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 Malloc11 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
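The block of RPCs traced above is a single loop over the 11 subsystems this test uses (NVMF_SUBSYS=11 in multiconnection.sh): each iteration creates a 64 MB malloc bdev with 512-byte blocks, a subsystem with a matching serial number, attaches the bdev as a namespace, and adds a TCP listener on the target address. A condensed sketch, assuming rpc_cmd is equivalent to invoking scripts/rpc.py against the target's default RPC socket:

  for i in $(seq 1 11); do
      rpc_cmd bdev_malloc_create 64 512 -b Malloc$i                   # 64 MB bdev, 512-byte blocks
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done

The initiator side then walks the same range: for each subsystem it runs nvme connect with the generated host NQN/ID against 10.0.0.2:4420 and polls lsblk until a device with the matching SPDK$i serial appears (the waitforserial helper in the trace that follows) before moving on to the next subsystem.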
00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.363 13:38:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:29.930 13:38:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:29.930 13:38:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:29.930 13:38:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:29.930 13:38:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:29.930 13:38:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:32.457 13:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:32.457 13:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:32.457 13:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:26:32.457 13:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:32.457 13:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:32.457 13:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:32.457 13:38:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.457 13:38:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:32.715 13:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:32.715 13:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:32.715 13:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:32.715 13:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:32.715 13:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:34.613 13:38:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:34.613 13:38:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:34.613 13:38:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:34.613 13:38:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:34.613 13:38:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:34.613 
13:38:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:34.613 13:38:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.613 13:38:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:35.548 13:38:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:35.548 13:38:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:35.548 13:38:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:35.548 13:38:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:35.548 13:38:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:37.446 13:38:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:37.446 13:38:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:37.446 13:38:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:37.446 13:38:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:37.446 13:38:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:37.446 13:38:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:37.446 13:38:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:37.446 13:38:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:38.378 13:38:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:38.378 13:38:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:38.378 13:38:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:38.378 13:38:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:38.378 13:38:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:40.307 13:38:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:40.307 13:38:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:40.307 13:38:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:40.307 13:38:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:40.307 13:38:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:40.307 13:38:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:40.307 13:38:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:40.307 13:38:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:40.872 13:38:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:40.872 13:38:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:40.872 13:38:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:40.872 13:38:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:40.872 13:38:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:43.397 13:38:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:43.397 13:38:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:43.397 13:38:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:43.397 13:38:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:43.397 13:38:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:43.397 13:38:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:43.397 13:38:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.397 13:38:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:43.655 13:38:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:43.655 13:38:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:43.655 13:38:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:43.655 13:38:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:43.655 13:38:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:46.213 13:38:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:46.213 13:38:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:46.213 13:38:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:46.213 13:38:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:46.213 13:38:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:46.213 13:38:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:46.213 13:38:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.213 13:38:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:46.472 13:38:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:46.472 13:38:21 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:46.472 13:38:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:46.472 13:38:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:46.472 13:38:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:48.997 13:38:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:48.998 13:38:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:48.998 13:38:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:48.998 13:38:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:48.998 13:38:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:48.998 13:38:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:48.998 13:38:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.998 13:38:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:49.561 13:38:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:49.561 13:38:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:49.561 13:38:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:49.561 13:38:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:49.561 13:38:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:51.454 13:38:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:51.454 13:38:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:51.454 13:38:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:51.454 13:38:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:51.454 13:38:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:51.454 13:38:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:51.454 13:38:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.454 13:38:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:52.387 13:38:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:52.387 13:38:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:52.387 13:38:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:52.387 13:38:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
00:26:52.387 13:38:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:54.287 13:38:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:54.287 13:38:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:54.287 13:38:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:54.287 13:38:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:54.287 13:38:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:54.287 13:38:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:54.287 13:38:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.287 13:38:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:55.663 13:38:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:55.663 13:38:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:55.663 13:38:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:55.663 13:38:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:55.663 13:38:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:57.562 13:38:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:57.562 13:38:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:57.562 13:38:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:57.562 13:38:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:57.562 13:38:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:57.562 13:38:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:57.562 13:38:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.562 13:38:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:58.495 13:38:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:58.495 13:38:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:58.495 13:38:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:58.495 13:38:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:58.495 13:38:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:00.389 13:38:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:00.389 13:38:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:27:00.389 13:38:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:27:00.389 13:38:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:00.389 13:38:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:00.389 13:38:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:00.389 13:38:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:00.389 [global] 00:27:00.389 thread=1 00:27:00.389 invalidate=1 00:27:00.389 rw=read 00:27:00.389 time_based=1 00:27:00.389 runtime=10 00:27:00.389 ioengine=libaio 00:27:00.389 direct=1 00:27:00.389 bs=262144 00:27:00.389 iodepth=64 00:27:00.389 norandommap=1 00:27:00.389 numjobs=1 00:27:00.389 00:27:00.389 [job0] 00:27:00.389 filename=/dev/nvme0n1 00:27:00.389 [job1] 00:27:00.389 filename=/dev/nvme10n1 00:27:00.389 [job2] 00:27:00.389 filename=/dev/nvme1n1 00:27:00.389 [job3] 00:27:00.389 filename=/dev/nvme2n1 00:27:00.389 [job4] 00:27:00.389 filename=/dev/nvme3n1 00:27:00.389 [job5] 00:27:00.389 filename=/dev/nvme4n1 00:27:00.389 [job6] 00:27:00.389 filename=/dev/nvme5n1 00:27:00.389 [job7] 00:27:00.389 filename=/dev/nvme6n1 00:27:00.389 [job8] 00:27:00.389 filename=/dev/nvme7n1 00:27:00.389 [job9] 00:27:00.389 filename=/dev/nvme8n1 00:27:00.389 [job10] 00:27:00.389 filename=/dev/nvme9n1 00:27:00.389 Could not set queue depth (nvme0n1) 00:27:00.389 Could not set queue depth (nvme10n1) 00:27:00.389 Could not set queue depth (nvme1n1) 00:27:00.389 Could not set queue depth (nvme2n1) 00:27:00.389 Could not set queue depth (nvme3n1) 00:27:00.389 Could not set queue depth (nvme4n1) 00:27:00.389 Could not set queue depth (nvme5n1) 00:27:00.389 Could not set queue depth (nvme6n1) 00:27:00.389 Could not set queue depth (nvme7n1) 00:27:00.389 Could not set queue depth (nvme8n1) 00:27:00.389 Could not set queue depth (nvme9n1) 00:27:00.646 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:00.646 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:00.646 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:00.646 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:00.646 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:00.646 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:00.646 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:00.646 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:00.646 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:00.646 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:00.646 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:00.646 fio-3.35 00:27:00.646 Starting 11 threads 00:27:12.842 00:27:12.842 job0: 
(groupid=0, jobs=1): err= 0: pid=352229: Sat Jul 13 13:38:45 2024 00:27:12.842 read: IOPS=454, BW=114MiB/s (119MB/s)(1148MiB/10101msec) 00:27:12.842 slat (usec): min=8, max=175341, avg=1298.94, stdev=7516.31 00:27:12.842 clat (usec): min=1006, max=403375, avg=139297.74, stdev=86064.77 00:27:12.842 lat (usec): min=1024, max=403392, avg=140596.68, stdev=86750.67 00:27:12.842 clat percentiles (msec): 00:27:12.842 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 17], 20.00th=[ 48], 00:27:12.842 | 30.00th=[ 91], 40.00th=[ 122], 50.00th=[ 148], 60.00th=[ 167], 00:27:12.842 | 70.00th=[ 184], 80.00th=[ 203], 90.00th=[ 236], 95.00th=[ 296], 00:27:12.842 | 99.00th=[ 376], 99.50th=[ 393], 99.90th=[ 401], 99.95th=[ 401], 00:27:12.842 | 99.99th=[ 405] 00:27:12.842 bw ( KiB/s): min=77312, max=188416, per=8.99%, avg=115950.85, stdev=34202.71, samples=20 00:27:12.842 iops : min= 302, max= 736, avg=452.90, stdev=133.55, samples=20 00:27:12.842 lat (msec) : 2=0.72%, 4=0.72%, 10=4.33%, 20=5.38%, 50=9.10% 00:27:12.842 lat (msec) : 100=12.74%, 250=59.76%, 500=7.25% 00:27:12.842 cpu : usr=0.28%, sys=1.28%, ctx=1000, majf=0, minf=4097 00:27:12.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:27:12.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.842 issued rwts: total=4593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.842 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.842 job1: (groupid=0, jobs=1): err= 0: pid=352230: Sat Jul 13 13:38:45 2024 00:27:12.842 read: IOPS=418, BW=105MiB/s (110MB/s)(1061MiB/10148msec) 00:27:12.842 slat (usec): min=9, max=315543, avg=1963.30, stdev=8306.63 00:27:12.842 clat (msec): min=2, max=546, avg=150.86, stdev=85.28 00:27:12.842 lat (msec): min=2, max=547, avg=152.82, stdev=86.60 00:27:12.842 clat percentiles (msec): 00:27:12.842 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 47], 20.00th=[ 70], 00:27:12.842 | 30.00th=[ 92], 40.00th=[ 120], 50.00th=[ 142], 60.00th=[ 174], 00:27:12.842 | 70.00th=[ 209], 80.00th=[ 228], 90.00th=[ 251], 95.00th=[ 279], 00:27:12.842 | 99.00th=[ 401], 99.50th=[ 422], 99.90th=[ 426], 99.95th=[ 426], 00:27:12.842 | 99.99th=[ 550] 00:27:12.842 bw ( KiB/s): min=31807, max=286658, per=8.30%, avg=107033.65, stdev=59719.78, samples=20 00:27:12.842 iops : min= 124, max= 1119, avg=418.05, stdev=233.18, samples=20 00:27:12.842 lat (msec) : 4=0.21%, 10=1.72%, 20=2.31%, 50=6.43%, 100=22.61% 00:27:12.842 lat (msec) : 250=56.96%, 500=9.73%, 750=0.02% 00:27:12.842 cpu : usr=0.19%, sys=1.46%, ctx=824, majf=0, minf=4097 00:27:12.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:12.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.842 issued rwts: total=4245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.842 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.842 job2: (groupid=0, jobs=1): err= 0: pid=352231: Sat Jul 13 13:38:45 2024 00:27:12.842 read: IOPS=418, BW=105MiB/s (110MB/s)(1066MiB/10196msec) 00:27:12.842 slat (usec): min=9, max=196958, avg=1544.93, stdev=8369.78 00:27:12.842 clat (usec): min=1013, max=419383, avg=151408.49, stdev=97269.04 00:27:12.842 lat (usec): min=1111, max=430319, avg=152953.42, stdev=98367.16 00:27:12.842 clat percentiles (msec): 00:27:12.842 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 16], 20.00th=[ 29], 00:27:12.842 | 
30.00th=[ 77], 40.00th=[ 148], 50.00th=[ 171], 60.00th=[ 190], 00:27:12.842 | 70.00th=[ 207], 80.00th=[ 230], 90.00th=[ 279], 95.00th=[ 313], 00:27:12.842 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 405], 99.95th=[ 418], 00:27:12.842 | 99.99th=[ 418] 00:27:12.842 bw ( KiB/s): min=47104, max=192000, per=8.33%, avg=107476.90, stdev=38616.02, samples=20 00:27:12.842 iops : min= 184, max= 750, avg=419.80, stdev=150.79, samples=20 00:27:12.842 lat (msec) : 2=0.23%, 4=0.75%, 10=3.82%, 20=9.71%, 50=10.58% 00:27:12.842 lat (msec) : 100=7.70%, 250=53.97%, 500=13.23% 00:27:12.842 cpu : usr=0.14%, sys=1.28%, ctx=861, majf=0, minf=3724 00:27:12.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:12.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.842 issued rwts: total=4262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.842 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.842 job3: (groupid=0, jobs=1): err= 0: pid=352234: Sat Jul 13 13:38:45 2024 00:27:12.842 read: IOPS=311, BW=77.8MiB/s (81.5MB/s)(792MiB/10186msec) 00:27:12.842 slat (usec): min=11, max=272427, avg=3016.21, stdev=10741.72 00:27:12.842 clat (msec): min=2, max=496, avg=202.56, stdev=65.84 00:27:12.842 lat (msec): min=2, max=496, avg=205.58, stdev=67.06 00:27:12.842 clat percentiles (msec): 00:27:12.842 | 1.00th=[ 10], 5.00th=[ 92], 10.00th=[ 125], 20.00th=[ 163], 00:27:12.842 | 30.00th=[ 178], 40.00th=[ 192], 50.00th=[ 203], 60.00th=[ 220], 00:27:12.842 | 70.00th=[ 234], 80.00th=[ 253], 90.00th=[ 275], 95.00th=[ 305], 00:27:12.843 | 99.00th=[ 351], 99.50th=[ 380], 99.90th=[ 401], 99.95th=[ 401], 00:27:12.843 | 99.99th=[ 498] 00:27:12.843 bw ( KiB/s): min=42496, max=123904, per=6.16%, avg=79479.15, stdev=18267.94, samples=20 00:27:12.843 iops : min= 166, max= 484, avg=310.45, stdev=71.35, samples=20 00:27:12.843 lat (msec) : 4=0.03%, 10=1.01%, 20=1.01%, 50=1.58%, 100=2.43% 00:27:12.843 lat (msec) : 250=72.76%, 500=21.18% 00:27:12.843 cpu : usr=0.16%, sys=1.24%, ctx=650, majf=0, minf=4097 00:27:12.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:12.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.843 issued rwts: total=3168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.843 job4: (groupid=0, jobs=1): err= 0: pid=352235: Sat Jul 13 13:38:45 2024 00:27:12.843 read: IOPS=502, BW=126MiB/s (132MB/s)(1277MiB/10156msec) 00:27:12.843 slat (usec): min=8, max=160600, avg=1081.95, stdev=6914.62 00:27:12.843 clat (usec): min=1071, max=381664, avg=126014.59, stdev=92608.02 00:27:12.843 lat (usec): min=1097, max=381732, avg=127096.53, stdev=93522.36 00:27:12.843 clat percentiles (msec): 00:27:12.843 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 11], 20.00th=[ 26], 00:27:12.843 | 30.00th=[ 59], 40.00th=[ 79], 50.00th=[ 113], 60.00th=[ 157], 00:27:12.843 | 70.00th=[ 184], 80.00th=[ 218], 90.00th=[ 251], 95.00th=[ 288], 00:27:12.843 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 376], 99.95th=[ 380], 00:27:12.843 | 99.99th=[ 380] 00:27:12.843 bw ( KiB/s): min=63488, max=223744, per=10.01%, avg=129160.90, stdev=48414.45, samples=20 00:27:12.843 iops : min= 248, max= 874, avg=504.50, stdev=189.10, samples=20 00:27:12.843 lat (msec) : 2=0.96%, 4=2.31%, 10=6.13%, 20=8.67%, 
50=9.04% 00:27:12.843 lat (msec) : 100=20.05%, 250=42.82%, 500=10.02% 00:27:12.843 cpu : usr=0.18%, sys=1.51%, ctx=1286, majf=0, minf=4097 00:27:12.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:12.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.843 issued rwts: total=5108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.843 job5: (groupid=0, jobs=1): err= 0: pid=352238: Sat Jul 13 13:38:45 2024 00:27:12.843 read: IOPS=331, BW=82.8MiB/s (86.8MB/s)(840MiB/10148msec) 00:27:12.843 slat (usec): min=12, max=215708, avg=2725.30, stdev=10105.33 00:27:12.843 clat (msec): min=4, max=470, avg=190.47, stdev=72.76 00:27:12.843 lat (msec): min=4, max=477, avg=193.19, stdev=74.18 00:27:12.843 clat percentiles (msec): 00:27:12.843 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 55], 20.00th=[ 157], 00:27:12.843 | 30.00th=[ 174], 40.00th=[ 188], 50.00th=[ 201], 60.00th=[ 213], 00:27:12.843 | 70.00th=[ 228], 80.00th=[ 243], 90.00th=[ 266], 95.00th=[ 292], 00:27:12.843 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 397], 99.95th=[ 472], 00:27:12.843 | 99.99th=[ 472] 00:27:12.843 bw ( KiB/s): min=52224, max=165888, per=6.54%, avg=84344.45, stdev=28577.12, samples=20 00:27:12.843 iops : min= 204, max= 648, avg=329.45, stdev=111.64, samples=20 00:27:12.843 lat (msec) : 10=1.19%, 20=3.51%, 50=4.85%, 100=1.76%, 250=72.79% 00:27:12.843 lat (msec) : 500=15.90% 00:27:12.843 cpu : usr=0.30%, sys=1.14%, ctx=687, majf=0, minf=4097 00:27:12.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:27:12.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.843 issued rwts: total=3359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.843 job6: (groupid=0, jobs=1): err= 0: pid=352240: Sat Jul 13 13:38:45 2024 00:27:12.843 read: IOPS=503, BW=126MiB/s (132MB/s)(1272MiB/10101msec) 00:27:12.843 slat (usec): min=8, max=171191, avg=840.62, stdev=7242.57 00:27:12.843 clat (usec): min=983, max=433460, avg=126160.22, stdev=101026.14 00:27:12.843 lat (usec): min=1041, max=444912, avg=127000.84, stdev=101764.86 00:27:12.843 clat percentiles (usec): 00:27:12.843 | 1.00th=[ 1614], 5.00th=[ 5080], 10.00th=[ 8979], 20.00th=[ 18744], 00:27:12.843 | 30.00th=[ 41681], 40.00th=[ 66323], 50.00th=[114820], 60.00th=[162530], 00:27:12.843 | 70.00th=[191890], 80.00th=[210764], 90.00th=[261096], 95.00th=[316670], 00:27:12.843 | 99.00th=[362808], 99.50th=[375391], 99.90th=[408945], 99.95th=[434111], 00:27:12.843 | 99.99th=[434111] 00:27:12.843 bw ( KiB/s): min=49664, max=215552, per=9.97%, avg=128580.25, stdev=47068.09, samples=20 00:27:12.843 iops : min= 194, max= 842, avg=502.25, stdev=183.88, samples=20 00:27:12.843 lat (usec) : 1000=0.02% 00:27:12.843 lat (msec) : 2=1.42%, 4=2.22%, 10=8.34%, 20=9.28%, 50=12.96% 00:27:12.843 lat (msec) : 100=12.70%, 250=41.45%, 500=11.62% 00:27:12.843 cpu : usr=0.21%, sys=1.36%, ctx=1192, majf=0, minf=4097 00:27:12.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:12.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.843 issued rwts: 
total=5086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.843 job7: (groupid=0, jobs=1): err= 0: pid=352244: Sat Jul 13 13:38:45 2024 00:27:12.843 read: IOPS=746, BW=187MiB/s (196MB/s)(1883MiB/10092msec) 00:27:12.843 slat (usec): min=12, max=101588, avg=1255.19, stdev=4746.04 00:27:12.843 clat (msec): min=2, max=231, avg=84.42, stdev=43.37 00:27:12.843 lat (msec): min=2, max=257, avg=85.68, stdev=44.14 00:27:12.843 clat percentiles (msec): 00:27:12.843 | 1.00th=[ 10], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 43], 00:27:12.843 | 30.00th=[ 54], 40.00th=[ 67], 50.00th=[ 81], 60.00th=[ 92], 00:27:12.843 | 70.00th=[ 106], 80.00th=[ 126], 90.00th=[ 148], 95.00th=[ 163], 00:27:12.843 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 203], 99.95th=[ 207], 00:27:12.843 | 99.99th=[ 232] 00:27:12.843 bw ( KiB/s): min=96768, max=373760, per=14.82%, avg=191197.00, stdev=75144.98, samples=20 00:27:12.843 iops : min= 378, max= 1460, avg=746.85, stdev=293.54, samples=20 00:27:12.843 lat (msec) : 4=0.19%, 10=0.84%, 20=2.66%, 50=24.47%, 100=37.47% 00:27:12.843 lat (msec) : 250=34.38% 00:27:12.843 cpu : usr=0.38%, sys=2.61%, ctx=1063, majf=0, minf=4097 00:27:12.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:12.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.843 issued rwts: total=7531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.843 job8: (groupid=0, jobs=1): err= 0: pid=352307: Sat Jul 13 13:38:45 2024 00:27:12.843 read: IOPS=355, BW=88.8MiB/s (93.1MB/s)(902MiB/10151msec) 00:27:12.843 slat (usec): min=9, max=215576, avg=2068.96, stdev=9132.10 00:27:12.843 clat (usec): min=1361, max=415809, avg=177937.22, stdev=85867.28 00:27:12.843 lat (usec): min=1385, max=510405, avg=180006.18, stdev=87268.47 00:27:12.843 clat percentiles (usec): 00:27:12.843 | 1.00th=[ 1631], 5.00th=[ 8848], 10.00th=[ 32900], 20.00th=[100140], 00:27:12.843 | 30.00th=[139461], 40.00th=[177210], 50.00th=[196084], 60.00th=[212861], 00:27:12.843 | 70.00th=[229639], 80.00th=[244319], 90.00th=[270533], 95.00th=[295699], 00:27:12.843 | 99.00th=[371196], 99.50th=[383779], 99.90th=[417334], 99.95th=[417334], 00:27:12.843 | 99.99th=[417334] 00:27:12.843 bw ( KiB/s): min=52224, max=224768, per=7.03%, avg=90692.50, stdev=38437.97, samples=20 00:27:12.843 iops : min= 204, max= 878, avg=354.25, stdev=150.15, samples=20 00:27:12.843 lat (msec) : 2=2.25%, 4=0.50%, 10=2.91%, 20=1.58%, 50=5.24% 00:27:12.843 lat (msec) : 100=7.43%, 250=63.20%, 500=16.89% 00:27:12.843 cpu : usr=0.23%, sys=1.06%, ctx=925, majf=0, minf=4097 00:27:12.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:12.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.843 issued rwts: total=3606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.843 job9: (groupid=0, jobs=1): err= 0: pid=352348: Sat Jul 13 13:38:45 2024 00:27:12.843 read: IOPS=554, BW=139MiB/s (145MB/s)(1400MiB/10098msec) 00:27:12.843 slat (usec): min=13, max=127522, avg=1592.67, stdev=6264.47 00:27:12.843 clat (msec): min=7, max=499, avg=113.69, stdev=74.86 00:27:12.843 lat (msec): min=7, max=499, avg=115.28, stdev=75.76 
00:27:12.843 clat percentiles (msec): 00:27:12.843 | 1.00th=[ 23], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 48], 00:27:12.843 | 30.00th=[ 59], 40.00th=[ 72], 50.00th=[ 86], 60.00th=[ 112], 00:27:12.843 | 70.00th=[ 144], 80.00th=[ 190], 90.00th=[ 232], 95.00th=[ 249], 00:27:12.843 | 99.00th=[ 355], 99.50th=[ 372], 99.90th=[ 372], 99.95th=[ 372], 00:27:12.843 | 99.99th=[ 502] 00:27:12.843 bw ( KiB/s): min=62464, max=278528, per=10.99%, avg=141745.35, stdev=78759.86, samples=20 00:27:12.843 iops : min= 244, max= 1088, avg=553.65, stdev=307.58, samples=20 00:27:12.843 lat (msec) : 10=0.09%, 20=0.52%, 50=22.26%, 100=33.15%, 250=39.33% 00:27:12.843 lat (msec) : 500=4.64% 00:27:12.843 cpu : usr=0.36%, sys=1.93%, ctx=965, majf=0, minf=4097 00:27:12.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:12.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.843 issued rwts: total=5601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.843 job10: (groupid=0, jobs=1): err= 0: pid=352380: Sat Jul 13 13:38:45 2024 00:27:12.843 read: IOPS=474, BW=119MiB/s (124MB/s)(1205MiB/10154msec) 00:27:12.843 slat (usec): min=9, max=88385, avg=910.58, stdev=5210.77 00:27:12.843 clat (usec): min=1101, max=363379, avg=133821.39, stdev=78986.83 00:27:12.843 lat (usec): min=1132, max=363403, avg=134731.96, stdev=79521.22 00:27:12.843 clat percentiles (msec): 00:27:12.843 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 22], 20.00th=[ 46], 00:27:12.843 | 30.00th=[ 82], 40.00th=[ 120], 50.00th=[ 144], 60.00th=[ 159], 00:27:12.843 | 70.00th=[ 176], 80.00th=[ 203], 90.00th=[ 239], 95.00th=[ 259], 00:27:12.843 | 99.00th=[ 305], 99.50th=[ 317], 99.90th=[ 363], 99.95th=[ 363], 00:27:12.843 | 99.99th=[ 363] 00:27:12.843 bw ( KiB/s): min=70656, max=274981, per=9.44%, avg=121771.45, stdev=45247.60, samples=20 00:27:12.843 iops : min= 276, max= 1074, avg=475.65, stdev=176.73, samples=20 00:27:12.843 lat (msec) : 2=0.25%, 4=0.81%, 10=4.67%, 20=3.82%, 50=11.37% 00:27:12.843 lat (msec) : 100=13.74%, 250=58.54%, 500=6.81% 00:27:12.843 cpu : usr=0.27%, sys=1.25%, ctx=1069, majf=0, minf=4097 00:27:12.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:12.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:12.843 issued rwts: total=4819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.843 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:12.843 00:27:12.843 Run status group 0 (all jobs): 00:27:12.844 READ: bw=1260MiB/s (1321MB/s), 77.8MiB/s-187MiB/s (81.5MB/s-196MB/s), io=12.5GiB (13.5GB), run=10092-10196msec 00:27:12.844 00:27:12.844 Disk stats (read/write): 00:27:12.844 nvme0n1: ios=9058/0, merge=0/0, ticks=1247904/0, in_queue=1247904, util=95.40% 00:27:12.844 nvme10n1: ios=8327/0, merge=0/0, ticks=1236396/0, in_queue=1236396, util=95.74% 00:27:12.844 nvme1n1: ios=8388/0, merge=0/0, ticks=1240273/0, in_queue=1240273, util=96.25% 00:27:12.844 nvme2n1: ios=6209/0, merge=0/0, ticks=1230142/0, in_queue=1230142, util=96.53% 00:27:12.844 nvme3n1: ios=10073/0, merge=0/0, ticks=1241329/0, in_queue=1241329, util=96.66% 00:27:12.844 nvme4n1: ios=6596/0, merge=0/0, ticks=1236193/0, in_queue=1236193, util=97.29% 00:27:12.844 nvme5n1: ios=10044/0, merge=0/0, ticks=1248011/0, in_queue=1248011, 
util=97.61% 00:27:12.844 nvme6n1: ios=14928/0, merge=0/0, ticks=1244166/0, in_queue=1244166, util=97.81% 00:27:12.844 nvme7n1: ios=7085/0, merge=0/0, ticks=1237583/0, in_queue=1237583, util=98.60% 00:27:12.844 nvme8n1: ios=11075/0, merge=0/0, ticks=1238990/0, in_queue=1238990, util=98.96% 00:27:12.844 nvme9n1: ios=9501/0, merge=0/0, ticks=1243897/0, in_queue=1243897, util=99.21% 00:27:12.844 13:38:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:12.844 [global] 00:27:12.844 thread=1 00:27:12.844 invalidate=1 00:27:12.844 rw=randwrite 00:27:12.844 time_based=1 00:27:12.844 runtime=10 00:27:12.844 ioengine=libaio 00:27:12.844 direct=1 00:27:12.844 bs=262144 00:27:12.844 iodepth=64 00:27:12.844 norandommap=1 00:27:12.844 numjobs=1 00:27:12.844 00:27:12.844 [job0] 00:27:12.844 filename=/dev/nvme0n1 00:27:12.844 [job1] 00:27:12.844 filename=/dev/nvme10n1 00:27:12.844 [job2] 00:27:12.844 filename=/dev/nvme1n1 00:27:12.844 [job3] 00:27:12.844 filename=/dev/nvme2n1 00:27:12.844 [job4] 00:27:12.844 filename=/dev/nvme3n1 00:27:12.844 [job5] 00:27:12.844 filename=/dev/nvme4n1 00:27:12.844 [job6] 00:27:12.844 filename=/dev/nvme5n1 00:27:12.844 [job7] 00:27:12.844 filename=/dev/nvme6n1 00:27:12.844 [job8] 00:27:12.844 filename=/dev/nvme7n1 00:27:12.844 [job9] 00:27:12.844 filename=/dev/nvme8n1 00:27:12.844 [job10] 00:27:12.844 filename=/dev/nvme9n1 00:27:12.844 Could not set queue depth (nvme0n1) 00:27:12.844 Could not set queue depth (nvme10n1) 00:27:12.844 Could not set queue depth (nvme1n1) 00:27:12.844 Could not set queue depth (nvme2n1) 00:27:12.844 Could not set queue depth (nvme3n1) 00:27:12.844 Could not set queue depth (nvme4n1) 00:27:12.844 Could not set queue depth (nvme5n1) 00:27:12.844 Could not set queue depth (nvme6n1) 00:27:12.844 Could not set queue depth (nvme7n1) 00:27:12.844 Could not set queue depth (nvme8n1) 00:27:12.844 Could not set queue depth (nvme9n1) 00:27:12.844 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.844 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.844 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.844 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.844 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.844 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.844 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.844 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.844 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.844 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.844 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:12.844 fio-3.35 00:27:12.844 Starting 11 threads 00:27:22.820 00:27:22.820 job0: (groupid=0, jobs=1): 
err= 0: pid=353408: Sat Jul 13 13:38:56 2024 00:27:22.820 write: IOPS=428, BW=107MiB/s (112MB/s)(1086MiB/10127msec); 0 zone resets 00:27:22.820 slat (usec): min=20, max=102194, avg=1662.12, stdev=4988.00 00:27:22.820 clat (msec): min=2, max=396, avg=147.54, stdev=97.62 00:27:22.820 lat (msec): min=2, max=396, avg=149.21, stdev=99.02 00:27:22.820 clat percentiles (msec): 00:27:22.820 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 37], 20.00th=[ 63], 00:27:22.820 | 30.00th=[ 86], 40.00th=[ 93], 50.00th=[ 104], 60.00th=[ 167], 00:27:22.820 | 70.00th=[ 209], 80.00th=[ 253], 90.00th=[ 296], 95.00th=[ 326], 00:27:22.820 | 99.00th=[ 363], 99.50th=[ 380], 99.90th=[ 393], 99.95th=[ 397], 00:27:22.820 | 99.99th=[ 397] 00:27:22.820 bw ( KiB/s): min=47104, max=217088, per=9.39%, avg=109535.25, stdev=55217.58, samples=20 00:27:22.820 iops : min= 184, max= 848, avg=427.80, stdev=215.65, samples=20 00:27:22.820 lat (msec) : 4=0.12%, 10=1.22%, 20=2.95%, 50=11.35%, 100=33.74% 00:27:22.820 lat (msec) : 250=30.12%, 500=20.50% 00:27:22.820 cpu : usr=1.21%, sys=1.41%, ctx=2539, majf=0, minf=1 00:27:22.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:27:22.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.820 issued rwts: total=0,4342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.820 job1: (groupid=0, jobs=1): err= 0: pid=353415: Sat Jul 13 13:38:56 2024 00:27:22.820 write: IOPS=470, BW=118MiB/s (123MB/s)(1208MiB/10263msec); 0 zone resets 00:27:22.820 slat (usec): min=23, max=128639, avg=1626.46, stdev=4469.04 00:27:22.820 clat (msec): min=3, max=436, avg=134.15, stdev=81.45 00:27:22.820 lat (msec): min=3, max=436, avg=135.78, stdev=82.36 00:27:22.820 clat percentiles (msec): 00:27:22.820 | 1.00th=[ 19], 5.00th=[ 42], 10.00th=[ 50], 20.00th=[ 59], 00:27:22.820 | 30.00th=[ 69], 40.00th=[ 110], 50.00th=[ 117], 60.00th=[ 129], 00:27:22.820 | 70.00th=[ 163], 80.00th=[ 201], 90.00th=[ 268], 95.00th=[ 296], 00:27:22.820 | 99.00th=[ 338], 99.50th=[ 372], 99.90th=[ 426], 99.95th=[ 426], 00:27:22.820 | 99.99th=[ 435] 00:27:22.820 bw ( KiB/s): min=55296, max=254464, per=10.46%, avg=122043.20, stdev=59577.18, samples=20 00:27:22.820 iops : min= 216, max= 994, avg=476.70, stdev=232.74, samples=20 00:27:22.820 lat (msec) : 4=0.02%, 10=0.29%, 20=0.87%, 50=9.21%, 100=26.21% 00:27:22.820 lat (msec) : 250=50.34%, 500=13.06% 00:27:22.820 cpu : usr=1.61%, sys=1.73%, ctx=2327, majf=0, minf=1 00:27:22.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:22.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.820 issued rwts: total=0,4831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.820 job2: (groupid=0, jobs=1): err= 0: pid=353420: Sat Jul 13 13:38:56 2024 00:27:22.820 write: IOPS=404, BW=101MiB/s (106MB/s)(1037MiB/10253msec); 0 zone resets 00:27:22.820 slat (usec): min=25, max=61986, avg=1970.14, stdev=4701.39 00:27:22.820 clat (msec): min=3, max=531, avg=155.97, stdev=79.06 00:27:22.820 lat (msec): min=3, max=531, avg=157.94, stdev=80.14 00:27:22.820 clat percentiles (msec): 00:27:22.820 | 1.00th=[ 15], 5.00th=[ 42], 10.00th=[ 68], 20.00th=[ 105], 00:27:22.820 | 30.00th=[ 117], 40.00th=[ 126], 50.00th=[ 
138], 60.00th=[ 153], 00:27:22.820 | 70.00th=[ 169], 80.00th=[ 234], 90.00th=[ 284], 95.00th=[ 296], 00:27:22.820 | 99.00th=[ 384], 99.50th=[ 456], 99.90th=[ 518], 99.95th=[ 518], 00:27:22.820 | 99.99th=[ 531] 00:27:22.820 bw ( KiB/s): min=53248, max=177152, per=8.97%, avg=104600.65, stdev=37420.04, samples=20 00:27:22.820 iops : min= 208, max= 692, avg=408.55, stdev=146.16, samples=20 00:27:22.820 lat (msec) : 4=0.02%, 10=0.43%, 20=1.06%, 50=5.06%, 100=11.38% 00:27:22.820 lat (msec) : 250=65.90%, 500=15.91%, 750=0.24% 00:27:22.820 cpu : usr=1.35%, sys=1.48%, ctx=1931, majf=0, minf=1 00:27:22.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:22.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.820 issued rwts: total=0,4149,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.820 job3: (groupid=0, jobs=1): err= 0: pid=353429: Sat Jul 13 13:38:56 2024 00:27:22.820 write: IOPS=675, BW=169MiB/s (177MB/s)(1712MiB/10144msec); 0 zone resets 00:27:22.820 slat (usec): min=15, max=43442, avg=1094.79, stdev=2697.43 00:27:22.820 clat (msec): min=2, max=338, avg=93.67, stdev=56.24 00:27:22.820 lat (msec): min=2, max=338, avg=94.76, stdev=56.79 00:27:22.820 clat percentiles (msec): 00:27:22.820 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 34], 20.00th=[ 53], 00:27:22.820 | 30.00th=[ 58], 40.00th=[ 80], 50.00th=[ 92], 60.00th=[ 96], 00:27:22.820 | 70.00th=[ 102], 80.00th=[ 128], 90.00th=[ 148], 95.00th=[ 207], 00:27:22.820 | 99.00th=[ 292], 99.50th=[ 317], 99.90th=[ 334], 99.95th=[ 338], 00:27:22.820 | 99.99th=[ 338] 00:27:22.820 bw ( KiB/s): min=99840, max=292352, per=14.89%, avg=173680.35, stdev=46470.69, samples=20 00:27:22.820 iops : min= 390, max= 1142, avg=678.40, stdev=181.54, samples=20 00:27:22.820 lat (msec) : 4=0.34%, 10=1.52%, 20=3.90%, 50=11.17%, 100=52.29% 00:27:22.820 lat (msec) : 250=27.23%, 500=3.55% 00:27:22.820 cpu : usr=2.31%, sys=2.04%, ctx=3154, majf=0, minf=1 00:27:22.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:22.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.820 issued rwts: total=0,6848,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.820 job4: (groupid=0, jobs=1): err= 0: pid=353430: Sat Jul 13 13:38:56 2024 00:27:22.820 write: IOPS=299, BW=75.0MiB/s (78.6MB/s)(766MiB/10218msec); 0 zone resets 00:27:22.820 slat (usec): min=17, max=65619, avg=3002.84, stdev=6434.80 00:27:22.820 clat (msec): min=7, max=497, avg=210.30, stdev=75.85 00:27:22.820 lat (msec): min=9, max=497, avg=213.31, stdev=76.69 00:27:22.820 clat percentiles (msec): 00:27:22.820 | 1.00th=[ 28], 5.00th=[ 99], 10.00th=[ 117], 20.00th=[ 127], 00:27:22.820 | 30.00th=[ 157], 40.00th=[ 199], 50.00th=[ 226], 60.00th=[ 247], 00:27:22.820 | 70.00th=[ 264], 80.00th=[ 275], 90.00th=[ 292], 95.00th=[ 313], 00:27:22.820 | 99.00th=[ 342], 99.50th=[ 430], 99.90th=[ 481], 99.95th=[ 493], 00:27:22.820 | 99.99th=[ 498] 00:27:22.820 bw ( KiB/s): min=53248, max=129024, per=6.59%, avg=76819.25, stdev=24549.58, samples=20 00:27:22.820 iops : min= 208, max= 504, avg=300.05, stdev=95.91, samples=20 00:27:22.820 lat (msec) : 10=0.10%, 20=0.39%, 50=2.06%, 100=2.51%, 250=56.98% 00:27:22.820 lat (msec) : 
500=37.96% 00:27:22.820 cpu : usr=1.07%, sys=0.95%, ctx=1079, majf=0, minf=1 00:27:22.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:27:22.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.820 issued rwts: total=0,3064,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.820 job5: (groupid=0, jobs=1): err= 0: pid=353431: Sat Jul 13 13:38:56 2024 00:27:22.820 write: IOPS=465, BW=116MiB/s (122MB/s)(1180MiB/10130msec); 0 zone resets 00:27:22.820 slat (usec): min=23, max=209840, avg=1797.45, stdev=5712.75 00:27:22.820 clat (msec): min=3, max=511, avg=135.33, stdev=68.45 00:27:22.820 lat (msec): min=3, max=511, avg=137.13, stdev=69.23 00:27:22.820 clat percentiles (msec): 00:27:22.820 | 1.00th=[ 25], 5.00th=[ 60], 10.00th=[ 68], 20.00th=[ 75], 00:27:22.820 | 30.00th=[ 99], 40.00th=[ 110], 50.00th=[ 121], 60.00th=[ 136], 00:27:22.821 | 70.00th=[ 155], 80.00th=[ 186], 90.00th=[ 228], 95.00th=[ 257], 00:27:22.821 | 99.00th=[ 380], 99.50th=[ 443], 99.90th=[ 498], 99.95th=[ 498], 00:27:22.821 | 99.99th=[ 510] 00:27:22.821 bw ( KiB/s): min=52224, max=209408, per=10.22%, avg=119216.10, stdev=42144.25, samples=20 00:27:22.821 iops : min= 204, max= 818, avg=465.65, stdev=164.65, samples=20 00:27:22.821 lat (msec) : 4=0.04%, 10=0.28%, 20=0.36%, 50=2.88%, 100=27.50% 00:27:22.821 lat (msec) : 250=62.82%, 500=6.10%, 750=0.02% 00:27:22.821 cpu : usr=1.38%, sys=1.77%, ctx=1775, majf=0, minf=1 00:27:22.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:22.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.821 issued rwts: total=0,4720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.821 job6: (groupid=0, jobs=1): err= 0: pid=353432: Sat Jul 13 13:38:56 2024 00:27:22.821 write: IOPS=309, BW=77.4MiB/s (81.1MB/s)(794MiB/10258msec); 0 zone resets 00:27:22.821 slat (usec): min=17, max=82834, avg=2389.00, stdev=6433.18 00:27:22.821 clat (msec): min=3, max=482, avg=204.26, stdev=97.83 00:27:22.821 lat (msec): min=3, max=482, avg=206.65, stdev=99.17 00:27:22.821 clat percentiles (msec): 00:27:22.821 | 1.00th=[ 11], 5.00th=[ 33], 10.00th=[ 59], 20.00th=[ 102], 00:27:22.821 | 30.00th=[ 155], 40.00th=[ 190], 50.00th=[ 222], 60.00th=[ 249], 00:27:22.821 | 70.00th=[ 271], 80.00th=[ 288], 90.00th=[ 317], 95.00th=[ 342], 00:27:22.821 | 99.00th=[ 380], 99.50th=[ 439], 99.90th=[ 472], 99.95th=[ 481], 00:27:22.821 | 99.99th=[ 481] 00:27:22.821 bw ( KiB/s): min=51200, max=117760, per=6.83%, avg=79660.45, stdev=22043.03, samples=20 00:27:22.821 iops : min= 200, max= 460, avg=311.15, stdev=86.12, samples=20 00:27:22.821 lat (msec) : 4=0.06%, 10=0.79%, 20=2.24%, 50=5.23%, 100=11.37% 00:27:22.821 lat (msec) : 250=40.47%, 500=39.84% 00:27:22.821 cpu : usr=0.91%, sys=0.93%, ctx=1728, majf=0, minf=1 00:27:22.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:22.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.821 issued rwts: total=0,3175,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.821 
job7: (groupid=0, jobs=1): err= 0: pid=353433: Sat Jul 13 13:38:56 2024 00:27:22.821 write: IOPS=404, BW=101MiB/s (106MB/s)(1025MiB/10146msec); 0 zone resets 00:27:22.821 slat (usec): min=21, max=183312, avg=1768.04, stdev=5656.11 00:27:22.821 clat (msec): min=2, max=581, avg=156.58, stdev=78.27 00:27:22.821 lat (msec): min=2, max=581, avg=158.35, stdev=79.15 00:27:22.821 clat percentiles (msec): 00:27:22.821 | 1.00th=[ 15], 5.00th=[ 35], 10.00th=[ 61], 20.00th=[ 97], 00:27:22.821 | 30.00th=[ 120], 40.00th=[ 127], 50.00th=[ 144], 60.00th=[ 165], 00:27:22.821 | 70.00th=[ 186], 80.00th=[ 222], 90.00th=[ 259], 95.00th=[ 296], 00:27:22.821 | 99.00th=[ 384], 99.50th=[ 409], 99.90th=[ 439], 99.95th=[ 575], 00:27:22.821 | 99.99th=[ 584] 00:27:22.821 bw ( KiB/s): min=53248, max=203776, per=8.86%, avg=103297.90, stdev=35092.96, samples=20 00:27:22.821 iops : min= 208, max= 796, avg=403.50, stdev=137.08, samples=20 00:27:22.821 lat (msec) : 4=0.02%, 10=0.32%, 20=1.59%, 50=5.78%, 100=13.20% 00:27:22.821 lat (msec) : 250=67.31%, 500=11.71%, 750=0.07% 00:27:22.821 cpu : usr=1.22%, sys=1.57%, ctx=2146, majf=0, minf=1 00:27:22.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:22.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.821 issued rwts: total=0,4099,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.821 job8: (groupid=0, jobs=1): err= 0: pid=353434: Sat Jul 13 13:38:56 2024 00:27:22.821 write: IOPS=343, BW=85.8MiB/s (90.0MB/s)(873MiB/10169msec); 0 zone resets 00:27:22.821 slat (usec): min=21, max=93615, avg=2287.23, stdev=6266.68 00:27:22.821 clat (msec): min=4, max=376, avg=183.94, stdev=91.25 00:27:22.821 lat (msec): min=6, max=376, avg=186.23, stdev=92.42 00:27:22.821 clat percentiles (msec): 00:27:22.821 | 1.00th=[ 12], 5.00th=[ 29], 10.00th=[ 63], 20.00th=[ 115], 00:27:22.821 | 30.00th=[ 132], 40.00th=[ 140], 50.00th=[ 171], 60.00th=[ 218], 00:27:22.821 | 70.00th=[ 247], 80.00th=[ 271], 90.00th=[ 313], 95.00th=[ 330], 00:27:22.821 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 376], 00:27:22.821 | 99.99th=[ 376] 00:27:22.821 bw ( KiB/s): min=47104, max=149504, per=7.52%, avg=87731.75, stdev=31859.87, samples=20 00:27:22.821 iops : min= 184, max= 584, avg=342.65, stdev=124.45, samples=20 00:27:22.821 lat (msec) : 10=0.74%, 20=2.26%, 50=5.59%, 100=9.28%, 250=53.64% 00:27:22.821 lat (msec) : 500=28.48% 00:27:22.821 cpu : usr=0.96%, sys=1.12%, ctx=1745, majf=0, minf=1 00:27:22.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:27:22.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.821 issued rwts: total=0,3490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.821 job9: (groupid=0, jobs=1): err= 0: pid=353435: Sat Jul 13 13:38:56 2024 00:27:22.821 write: IOPS=429, BW=107MiB/s (113MB/s)(1102MiB/10261msec); 0 zone resets 00:27:22.821 slat (usec): min=20, max=96391, avg=1857.93, stdev=4978.34 00:27:22.821 clat (msec): min=3, max=488, avg=147.09, stdev=85.45 00:27:22.821 lat (msec): min=3, max=488, avg=148.95, stdev=86.62 00:27:22.821 clat percentiles (msec): 00:27:22.821 | 1.00th=[ 16], 5.00th=[ 42], 10.00th=[ 51], 20.00th=[ 55], 00:27:22.821 | 30.00th=[ 87], 
40.00th=[ 117], 50.00th=[ 133], 60.00th=[ 159], 00:27:22.821 | 70.00th=[ 205], 80.00th=[ 236], 90.00th=[ 264], 95.00th=[ 279], 00:27:22.821 | 99.00th=[ 355], 99.50th=[ 401], 99.90th=[ 472], 99.95th=[ 472], 00:27:22.821 | 99.99th=[ 489] 00:27:22.821 bw ( KiB/s): min=55296, max=285184, per=9.53%, avg=111125.05, stdev=57601.00, samples=20 00:27:22.821 iops : min= 216, max= 1114, avg=434.05, stdev=224.97, samples=20 00:27:22.821 lat (msec) : 4=0.05%, 10=0.41%, 20=1.25%, 50=7.58%, 100=26.15% 00:27:22.821 lat (msec) : 250=49.41%, 500=15.16% 00:27:22.821 cpu : usr=1.26%, sys=1.45%, ctx=2081, majf=0, minf=1 00:27:22.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:22.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.821 issued rwts: total=0,4406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.821 job10: (groupid=0, jobs=1): err= 0: pid=353436: Sat Jul 13 13:38:56 2024 00:27:22.821 write: IOPS=354, BW=88.6MiB/s (92.9MB/s)(909MiB/10256msec); 0 zone resets 00:27:22.821 slat (usec): min=21, max=50414, avg=2428.89, stdev=5419.24 00:27:22.821 clat (msec): min=3, max=496, avg=178.04, stdev=90.74 00:27:22.821 lat (msec): min=3, max=496, avg=180.47, stdev=91.93 00:27:22.821 clat percentiles (msec): 00:27:22.821 | 1.00th=[ 18], 5.00th=[ 35], 10.00th=[ 64], 20.00th=[ 110], 00:27:22.821 | 30.00th=[ 116], 40.00th=[ 121], 50.00th=[ 163], 60.00th=[ 207], 00:27:22.821 | 70.00th=[ 249], 80.00th=[ 271], 90.00th=[ 292], 95.00th=[ 317], 00:27:22.821 | 99.00th=[ 405], 99.50th=[ 422], 99.90th=[ 481], 99.95th=[ 498], 00:27:22.821 | 99.99th=[ 498] 00:27:22.821 bw ( KiB/s): min=51200, max=185856, per=7.84%, avg=91430.25, stdev=36411.78, samples=20 00:27:22.821 iops : min= 200, max= 726, avg=357.10, stdev=142.28, samples=20 00:27:22.821 lat (msec) : 4=0.03%, 10=0.22%, 20=0.96%, 50=7.68%, 100=6.85% 00:27:22.821 lat (msec) : 250=54.55%, 500=29.71% 00:27:22.821 cpu : usr=1.29%, sys=1.09%, ctx=1482, majf=0, minf=1 00:27:22.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:22.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:22.821 issued rwts: total=0,3635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:22.821 00:27:22.821 Run status group 0 (all jobs): 00:27:22.821 WRITE: bw=1139MiB/s (1194MB/s), 75.0MiB/s-169MiB/s (78.6MB/s-177MB/s), io=11.4GiB (12.3GB), run=10127-10263msec 00:27:22.821 00:27:22.821 Disk stats (read/write): 00:27:22.821 nvme0n1: ios=49/8499, merge=0/0, ticks=89/1212700, in_queue=1212789, util=97.59% 00:27:22.821 nvme10n1: ios=47/9597, merge=0/0, ticks=2413/1235409, in_queue=1237822, util=99.57% 00:27:22.821 nvme1n1: ios=49/8247, merge=0/0, ticks=2670/1232303, in_queue=1234973, util=99.91% 00:27:22.821 nvme2n1: ios=32/13507, merge=0/0, ticks=128/1212758, in_queue=1212886, util=98.26% 00:27:22.821 nvme3n1: ios=20/6095, merge=0/0, ticks=37/1229299, in_queue=1229336, util=97.86% 00:27:22.821 nvme4n1: ios=47/9250, merge=0/0, ticks=7618/1175335, in_queue=1182953, util=99.88% 00:27:22.821 nvme5n1: ios=0/6296, merge=0/0, ticks=0/1236360, in_queue=1236360, util=98.36% 00:27:22.821 nvme6n1: ios=0/8011, merge=0/0, ticks=0/1214392, in_queue=1214392, util=98.41% 00:27:22.821 nvme7n1: 
ios=43/6976, merge=0/0, ticks=2323/1238406, in_queue=1240729, util=99.91% 00:27:22.821 nvme8n1: ios=0/8752, merge=0/0, ticks=0/1233108, in_queue=1233108, util=99.01% 00:27:22.821 nvme9n1: ios=0/7216, merge=0/0, ticks=0/1230040, in_queue=1230040, util=99.12% 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:22.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:22.821 13:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:23.387 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:23.387 13:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:23.387 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:23.387 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:23.387 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:27:23.387 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:23.387 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:27:23.387 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:23.387 13:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:23.387 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.387 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:23.387 13:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.387 
13:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:23.387 13:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:23.645 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:23.645 13:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:23.645 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:23.645 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:23.645 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:27:23.645 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:23.645 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:27:23.645 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:23.645 13:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:23.645 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.645 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:23.645 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.645 13:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:23.645 13:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:24.210 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:24.210 13:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:24.210 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:24.210 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:24.210 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:27:24.210 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:24.210 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:27:24.210 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:24.210 13:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:24.210 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.210 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.210 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.210 13:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.210 13:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:24.468 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:24.468 13:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:24.468 13:38:58 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:24.468 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:24.468 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:27:24.468 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:24.468 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:27:24.468 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:24.468 13:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:24.468 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.468 13:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.468 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.468 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.468 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:24.468 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:24.468 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:24.468 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:24.468 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:24.468 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:27:24.468 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:24.468 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:27:24.468 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:24.468 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:24.468 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.468 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.726 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.726 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.726 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:24.726 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:24.726 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:24.727 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:24.727 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:24.727 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:27:24.727 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:24.727 13:38:59 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:27:24.727 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:24.727 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:24.727 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.727 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.727 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.727 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.727 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:24.984 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:24.984 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:24.984 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:24.984 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:24.984 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:27:24.984 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:24.984 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:27:24.984 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:24.984 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:24.984 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.984 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.984 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.984 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.984 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:25.242 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:25.242 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:25.242 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:25.242 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:25.242 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:27:25.242 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:25.242 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:27:25.242 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:25.242 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:25.242 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.242 13:38:59 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:27:25.242 13:38:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.242 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:25.242 13:38:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:25.500 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:25.500 13:39:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:25.500 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:25.500 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:25.500 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:27:25.500 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:25.500 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:27:25.500 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:25.500 13:39:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:25.500 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.500 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:25.500 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.500 13:39:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:25.500 13:39:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:25.764 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@47 -- # nvmftestfini 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:25.764 rmmod nvme_tcp 00:27:25.764 rmmod nvme_fabrics 00:27:25.764 rmmod nvme_keyring 00:27:25.764 13:39:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 347819 ']' 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 347819 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 347819 ']' 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 347819 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 347819 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 347819' 00:27:25.765 killing process with pid 347819 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 347819 00:27:25.765 13:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 347819 00:27:29.114 13:39:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:29.114 13:39:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:29.114 13:39:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:29.114 13:39:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:29.114 13:39:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:29.114 13:39:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.114 13:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.114 13:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.014 13:39:05 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:31.014 00:27:31.014 real 1m6.075s 00:27:31.014 user 3m40.479s 00:27:31.014 sys 0m22.725s 00:27:31.014 13:39:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:31.014 13:39:05 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:27:31.014 ************************************ 00:27:31.014 END TEST nvmf_multiconnection 00:27:31.014 ************************************ 00:27:31.014 13:39:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:31.014 13:39:05 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:31.014 13:39:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:31.014 13:39:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.014 13:39:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:31.014 ************************************ 00:27:31.014 START TEST nvmf_initiator_timeout 00:27:31.014 ************************************ 00:27:31.014 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:31.273 * Looking for test storage... 00:27:31.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.273 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:31.274 13:39:05 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:27:31.274 13:39:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.177 13:39:07 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:33.177 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:33.177 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:33.177 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:33.177 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:33.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:27:33.177 00:27:33.177 --- 10.0.0.2 ping statistics --- 00:27:33.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.177 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:27:33.177 00:27:33.177 --- 10.0.0.1 ping statistics --- 00:27:33.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.177 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.177 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:33.178 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:33.178 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:33.178 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:33.178 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:33.178 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:33.436 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=357519 00:27:33.436 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:33.436 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 357519 00:27:33.436 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 357519 ']' 00:27:33.436 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.436 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:33.436 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.436 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:33.436 13:39:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:33.436 [2024-07-13 13:39:08.007941] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:33.436 [2024-07-13 13:39:08.008078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.436 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.436 [2024-07-13 13:39:08.149339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:33.694 [2024-07-13 13:39:08.409125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.694 [2024-07-13 13:39:08.409208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.694 [2024-07-13 13:39:08.409236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.694 [2024-07-13 13:39:08.409257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.694 [2024-07-13 13:39:08.409278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.694 [2024-07-13 13:39:08.409403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.694 [2024-07-13 13:39:08.409487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.694 [2024-07-13 13:39:08.409554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.694 [2024-07-13 13:39:08.409566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.261 13:39:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:34.261 13:39:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:27:34.261 13:39:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:34.261 13:39:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:34.261 13:39:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:34.261 13:39:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.261 13:39:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:34.261 13:39:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:34.261 13:39:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.261 13:39:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 Malloc0 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 Delay0 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:34.519 13:39:09 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 [2024-07-13 13:39:09.031593] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 [2024-07-13 13:39:09.060577] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:35.085 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:35.085 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:27:35.085 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:35.085 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:35.085 13:39:09 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:36.985 13:39:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:36.985 13:39:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:36.985 13:39:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:36.985 13:39:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:36.985 13:39:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:36.985 13:39:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:36.985 13:39:11 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=358212 00:27:36.985 13:39:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:36.985 13:39:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:36.985 [global] 00:27:36.985 thread=1 00:27:36.985 invalidate=1 00:27:36.985 rw=write 00:27:36.985 time_based=1 00:27:36.985 runtime=60 00:27:36.985 ioengine=libaio 00:27:36.985 direct=1 00:27:36.985 bs=4096 00:27:36.985 iodepth=1 00:27:36.985 norandommap=0 00:27:36.985 numjobs=1 00:27:36.985 00:27:36.985 verify_dump=1 00:27:36.985 verify_backlog=512 00:27:36.985 verify_state_save=0 00:27:36.985 do_verify=1 00:27:36.985 verify=crc32c-intel 00:27:36.985 [job0] 00:27:36.985 filename=/dev/nvme0n1 00:27:36.985 Could not set queue depth (nvme0n1) 00:27:37.242 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:37.243 fio-3.35 00:27:37.243 Starting 1 thread 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.521 true 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.521 true 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.521 true 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.521 true 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.521 13:39:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.049 true 00:27:43.049 13:39:17 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.049 true 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.049 true 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.049 true 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:43.049 13:39:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 358212 00:28:39.275 00:28:39.275 job0: (groupid=0, jobs=1): err= 0: pid=358284: Sat Jul 13 13:40:12 2024 00:28:39.275 read: IOPS=48, BW=192KiB/s (197kB/s)(11.3MiB/60013msec) 00:28:39.275 slat (usec): min=6, max=9714, avg=22.40, stdev=228.21 00:28:39.275 clat (usec): min=412, max=41102k, avg=20307.74, stdev=765500.87 00:28:39.275 lat (usec): min=419, max=41102k, avg=20330.14, stdev=765500.78 00:28:39.275 clat percentiles (usec): 00:28:39.275 | 1.00th=[ 441], 5.00th=[ 457], 10.00th=[ 465], 00:28:39.275 | 20.00th=[ 482], 30.00th=[ 490], 40.00th=[ 502], 00:28:39.275 | 50.00th=[ 519], 60.00th=[ 553], 70.00th=[ 570], 00:28:39.275 | 80.00th=[ 611], 90.00th=[ 41157], 95.00th=[ 41157], 00:28:39.275 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 44827], 00:28:39.275 | 99.95th=[ 45876], 99.99th=[17112761] 00:28:39.275 write: IOPS=51, BW=205KiB/s (210kB/s)(12.0MiB/60013msec); 0 zone resets 00:28:39.275 slat (usec): min=7, max=30401, avg=34.23, stdev=548.22 00:28:39.275 clat (usec): min=274, max=662, avg=410.97, stdev=72.39 00:28:39.275 lat (usec): min=284, max=30869, avg=445.20, stdev=554.82 00:28:39.275 clat percentiles (usec): 00:28:39.275 | 1.00th=[ 285], 5.00th=[ 306], 10.00th=[ 322], 20.00th=[ 338], 00:28:39.275 | 30.00th=[ 359], 40.00th=[ 388], 50.00th=[ 408], 60.00th=[ 433], 00:28:39.275 | 70.00th=[ 453], 80.00th=[ 474], 90.00th=[ 510], 95.00th=[ 529], 00:28:39.275 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 603], 99.95th=[ 619], 00:28:39.275 | 99.99th=[ 660] 00:28:39.275 bw ( KiB/s): min= 736, max= 4552, per=100.00%, avg=3510.86, stdev=1324.24, samples=7 00:28:39.275 iops : min= 184, max= 1138, avg=877.71, stdev=331.06, samples=7 00:28:39.275 lat (usec) : 500=62.97%, 750=30.34%, 1000=0.08% 00:28:39.275 lat (msec) : 2=0.02%, 50=6.57%, >=2000=0.02% 00:28:39.275 cpu : usr=0.16%, sys=0.25%, ctx=5960, majf=0, minf=2 
00:28:39.275 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:39.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.275 issued rwts: total=2883,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:39.275 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:39.275 00:28:39.275 Run status group 0 (all jobs): 00:28:39.275 READ: bw=192KiB/s (197kB/s), 192KiB/s-192KiB/s (197kB/s-197kB/s), io=11.3MiB (11.8MB), run=60013-60013msec 00:28:39.275 WRITE: bw=205KiB/s (210kB/s), 205KiB/s-205KiB/s (210kB/s-210kB/s), io=12.0MiB (12.6MB), run=60013-60013msec 00:28:39.275 00:28:39.275 Disk stats (read/write): 00:28:39.275 nvme0n1: ios=2932/3072, merge=0/0, ticks=18567/1167, in_queue=19734, util=99.69% 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:39.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:39.275 nvmf hotplug test: fio successful as expected 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-tcp 00:28:39.275 rmmod nvme_tcp 00:28:39.275 rmmod nvme_fabrics 00:28:39.275 rmmod nvme_keyring 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 357519 ']' 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 357519 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 357519 ']' 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 357519 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 357519 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 357519' 00:28:39.275 killing process with pid 357519 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 357519 00:28:39.275 13:40:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 357519 00:28:39.275 13:40:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:39.275 13:40:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:39.275 13:40:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:39.275 13:40:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:39.275 13:40:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:39.275 13:40:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.275 13:40:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:39.275 13:40:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.177 13:40:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:41.177 00:28:41.177 real 1m9.963s 00:28:41.177 user 4m14.895s 00:28:41.177 sys 0m6.908s 00:28:41.177 13:40:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:41.177 13:40:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:41.177 ************************************ 00:28:41.177 END TEST nvmf_initiator_timeout 00:28:41.177 ************************************ 00:28:41.177 13:40:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:41.177 13:40:15 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:28:41.177 13:40:15 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:28:41.177 13:40:15 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:28:41.177 13:40:15 nvmf_tcp -- 
nvmf/common.sh@285 -- # xtrace_disable 00:28:41.177 13:40:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.075 13:40:17 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.075 13:40:17 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:43.076 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:43.076 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:43.076 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:43.076 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:28:43.076 13:40:17 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:43.076 13:40:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:43.076 13:40:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.076 13:40:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.076 ************************************ 00:28:43.076 START TEST nvmf_perf_adq 00:28:43.076 ************************************ 00:28:43.076 13:40:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:43.333 * Looking for test storage... 
00:28:43.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:43.333 13:40:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.333 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:43.333 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.333 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.333 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.333 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:43.334 13:40:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.236 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.236 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:45.236 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:45.236 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:45.237 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:45.237 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:45.237 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:45.237 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:28:45.237 13:40:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:45.803 13:40:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:47.704 13:40:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:52.971 13:40:27 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:28:52.971 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:52.971 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.971 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:52.971 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:52.971 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:52.972 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:52.972 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:52.972 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:52.972 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.972 13:40:27 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:52.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:28:52.972 00:28:52.972 --- 10.0.0.2 ping statistics --- 00:28:52.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.972 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:52.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:28:52.972 00:28:52.972 --- 10.0.0.1 ping statistics --- 00:28:52.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.972 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=369918 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 369918 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 369918 ']' 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.972 13:40:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:52.973 13:40:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.973 [2024-07-13 13:40:27.655889] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:52.973 [2024-07-13 13:40:27.656032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.230 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.230 [2024-07-13 13:40:27.814617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:53.488 [2024-07-13 13:40:28.064764] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.488 [2024-07-13 13:40:28.064829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.488 [2024-07-13 13:40:28.064874] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.488 [2024-07-13 13:40:28.064893] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.488 [2024-07-13 13:40:28.064928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:53.488 [2024-07-13 13:40:28.065031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.488 [2024-07-13 13:40:28.068900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:53.488 [2024-07-13 13:40:28.068936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.488 [2024-07-13 13:40:28.068943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.052 13:40:28 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:28:54.311 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.311 13:40:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:54.311 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.311 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.311 [2024-07-13 13:40:28.954393] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.311 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.311 13:40:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:54.311 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.311 13:40:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.311 Malloc1 00:28:54.311 13:40:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.311 13:40:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:54.311 13:40:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.311 13:40:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.311 13:40:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.311 13:40:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:54.311 13:40:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.311 13:40:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.311 13:40:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.311 13:40:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.311 13:40:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.311 13:40:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:54.605 [2024-07-13 13:40:29.059093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.605 13:40:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.605 13:40:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=370194 00:28:54.605 13:40:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:28:54.605 13:40:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:54.605 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.503 13:40:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:56.503 13:40:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.503 13:40:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:56.503 13:40:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.503 13:40:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:28:56.503 
"tick_rate": 2700000000, 00:28:56.503 "poll_groups": [ 00:28:56.503 { 00:28:56.503 "name": "nvmf_tgt_poll_group_000", 00:28:56.503 "admin_qpairs": 1, 00:28:56.503 "io_qpairs": 1, 00:28:56.503 "current_admin_qpairs": 1, 00:28:56.503 "current_io_qpairs": 1, 00:28:56.503 "pending_bdev_io": 0, 00:28:56.503 "completed_nvme_io": 17571, 00:28:56.503 "transports": [ 00:28:56.503 { 00:28:56.503 "trtype": "TCP" 00:28:56.503 } 00:28:56.503 ] 00:28:56.503 }, 00:28:56.503 { 00:28:56.503 "name": "nvmf_tgt_poll_group_001", 00:28:56.503 "admin_qpairs": 0, 00:28:56.503 "io_qpairs": 1, 00:28:56.503 "current_admin_qpairs": 0, 00:28:56.503 "current_io_qpairs": 1, 00:28:56.503 "pending_bdev_io": 0, 00:28:56.503 "completed_nvme_io": 17874, 00:28:56.503 "transports": [ 00:28:56.503 { 00:28:56.503 "trtype": "TCP" 00:28:56.503 } 00:28:56.503 ] 00:28:56.503 }, 00:28:56.503 { 00:28:56.503 "name": "nvmf_tgt_poll_group_002", 00:28:56.503 "admin_qpairs": 0, 00:28:56.503 "io_qpairs": 1, 00:28:56.503 "current_admin_qpairs": 0, 00:28:56.503 "current_io_qpairs": 1, 00:28:56.503 "pending_bdev_io": 0, 00:28:56.503 "completed_nvme_io": 17315, 00:28:56.503 "transports": [ 00:28:56.503 { 00:28:56.503 "trtype": "TCP" 00:28:56.503 } 00:28:56.503 ] 00:28:56.503 }, 00:28:56.503 { 00:28:56.503 "name": "nvmf_tgt_poll_group_003", 00:28:56.503 "admin_qpairs": 0, 00:28:56.503 "io_qpairs": 1, 00:28:56.503 "current_admin_qpairs": 0, 00:28:56.503 "current_io_qpairs": 1, 00:28:56.503 "pending_bdev_io": 0, 00:28:56.503 "completed_nvme_io": 16065, 00:28:56.503 "transports": [ 00:28:56.503 { 00:28:56.503 "trtype": "TCP" 00:28:56.503 } 00:28:56.503 ] 00:28:56.503 } 00:28:56.503 ] 00:28:56.503 }' 00:28:56.503 13:40:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:56.503 13:40:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:28:56.503 13:40:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:28:56.503 13:40:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:28:56.503 13:40:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 370194 00:29:04.611 Initializing NVMe Controllers 00:29:04.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:04.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:04.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:04.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:04.611 Initialization complete. Launching workers. 
00:29:04.611 ======================================================== 00:29:04.611 Latency(us) 00:29:04.611 Device Information : IOPS MiB/s Average min max 00:29:04.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9603.40 37.51 6664.40 3698.58 9994.37 00:29:04.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9707.50 37.92 6593.71 2585.27 9445.95 00:29:04.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9344.20 36.50 6849.05 2532.80 10928.80 00:29:04.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8744.10 34.16 7321.82 3908.00 9955.52 00:29:04.611 ======================================================== 00:29:04.611 Total : 37399.19 146.09 6845.89 2532.80 10928.80 00:29:04.611 00:29:04.611 13:40:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:29:04.611 13:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:04.611 13:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:04.611 13:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:04.611 13:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:04.611 13:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:04.611 13:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:04.611 rmmod nvme_tcp 00:29:04.611 rmmod nvme_fabrics 00:29:04.611 rmmod nvme_keyring 00:29:04.611 13:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:04.611 13:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:04.611 13:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:04.611 13:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 369918 ']' 00:29:04.871 13:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 369918 00:29:04.871 13:40:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 369918 ']' 00:29:04.871 13:40:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 369918 00:29:04.871 13:40:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:29:04.871 13:40:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:04.871 13:40:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 369918 00:29:04.871 13:40:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:04.871 13:40:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:04.871 13:40:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 369918' 00:29:04.871 killing process with pid 369918 00:29:04.871 13:40:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 369918 00:29:04.871 13:40:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 369918 00:29:06.242 13:40:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:06.242 13:40:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:06.242 13:40:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:06.242 13:40:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:06.242 13:40:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:06.242 13:40:40 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.242 13:40:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:06.242 13:40:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.145 13:40:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:08.145 13:40:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:29:08.145 13:40:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:09.077 13:40:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:10.985 13:40:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.259 13:40:50 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:16.259 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:16.259 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:16.259 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:16.259 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:16.259 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:16.260 
13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:16.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:16.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:29:16.260 00:29:16.260 --- 10.0.0.2 ping statistics --- 00:29:16.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.260 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:16.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:16.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:29:16.260 00:29:16.260 --- 10.0.0.1 ping statistics --- 00:29:16.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.260 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:16.260 net.core.busy_poll = 1 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:16.260 net.core.busy_read = 1 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=372940 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 372940 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 372940 ']' 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:16.260 13:40:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.260 [2024-07-13 13:40:50.820281] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:16.260 [2024-07-13 13:40:50.820430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.260 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.260 [2024-07-13 13:40:50.951215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:16.517 [2024-07-13 13:40:51.204229] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.517 [2024-07-13 13:40:51.204301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.517 [2024-07-13 13:40:51.204329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.517 [2024-07-13 13:40:51.204351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.517 [2024-07-13 13:40:51.204372] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:16.517 [2024-07-13 13:40:51.204492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.517 [2024-07-13 13:40:51.204569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.517 [2024-07-13 13:40:51.204644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.517 [2024-07-13 13:40:51.204653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.084 13:40:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.343 13:40:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.343 13:40:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:17.343 13:40:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.343 13:40:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.602 [2024-07-13 13:40:52.188161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.602 Malloc1 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.602 13:40:52 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.602 [2024-07-13 13:40:52.292982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=373096 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:29:17.602 13:40:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:17.859 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.817 13:40:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:29:19.817 13:40:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.817 13:40:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:19.817 13:40:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.817 13:40:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:29:19.817 "tick_rate": 2700000000, 00:29:19.817 "poll_groups": [ 00:29:19.817 { 00:29:19.817 "name": "nvmf_tgt_poll_group_000", 00:29:19.817 "admin_qpairs": 1, 00:29:19.817 "io_qpairs": 1, 00:29:19.817 "current_admin_qpairs": 1, 00:29:19.817 "current_io_qpairs": 1, 00:29:19.817 "pending_bdev_io": 0, 00:29:19.817 "completed_nvme_io": 18452, 00:29:19.817 "transports": [ 00:29:19.817 { 00:29:19.817 "trtype": "TCP" 00:29:19.817 } 00:29:19.817 ] 00:29:19.817 }, 00:29:19.817 { 00:29:19.817 "name": "nvmf_tgt_poll_group_001", 00:29:19.817 "admin_qpairs": 0, 00:29:19.817 "io_qpairs": 3, 00:29:19.817 "current_admin_qpairs": 0, 00:29:19.817 "current_io_qpairs": 3, 00:29:19.817 "pending_bdev_io": 0, 00:29:19.817 "completed_nvme_io": 18599, 00:29:19.817 "transports": [ 00:29:19.817 { 00:29:19.817 "trtype": "TCP" 00:29:19.817 } 00:29:19.817 ] 00:29:19.817 }, 00:29:19.817 { 00:29:19.817 "name": "nvmf_tgt_poll_group_002", 00:29:19.817 "admin_qpairs": 0, 00:29:19.817 "io_qpairs": 0, 00:29:19.817 "current_admin_qpairs": 0, 00:29:19.817 "current_io_qpairs": 0, 00:29:19.817 "pending_bdev_io": 0, 00:29:19.817 "completed_nvme_io": 0, 
00:29:19.817 "transports": [ 00:29:19.817 { 00:29:19.817 "trtype": "TCP" 00:29:19.817 } 00:29:19.817 ] 00:29:19.817 }, 00:29:19.817 { 00:29:19.817 "name": "nvmf_tgt_poll_group_003", 00:29:19.817 "admin_qpairs": 0, 00:29:19.817 "io_qpairs": 0, 00:29:19.817 "current_admin_qpairs": 0, 00:29:19.817 "current_io_qpairs": 0, 00:29:19.817 "pending_bdev_io": 0, 00:29:19.817 "completed_nvme_io": 0, 00:29:19.817 "transports": [ 00:29:19.817 { 00:29:19.817 "trtype": "TCP" 00:29:19.817 } 00:29:19.817 ] 00:29:19.817 } 00:29:19.817 ] 00:29:19.817 }' 00:29:19.817 13:40:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:19.817 13:40:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:29:19.817 13:40:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:29:19.817 13:40:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:29:19.817 13:40:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 373096 00:29:27.932 Initializing NVMe Controllers 00:29:27.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:27.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:27.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:27.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:27.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:27.932 Initialization complete. Launching workers. 00:29:27.932 ======================================================== 00:29:27.932 Latency(us) 00:29:27.932 Device Information : IOPS MiB/s Average min max 00:29:27.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3588.70 14.02 17847.30 2839.13 68037.57 00:29:27.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10790.60 42.15 5931.56 2027.74 9302.67 00:29:27.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3694.80 14.43 17387.39 2632.20 67139.40 00:29:27.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3641.20 14.22 17589.71 2939.50 67165.18 00:29:27.932 ======================================================== 00:29:27.932 Total : 21715.29 84.83 11804.78 2027.74 68037.57 00:29:27.932 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:28.190 rmmod nvme_tcp 00:29:28.190 rmmod nvme_fabrics 00:29:28.190 rmmod nvme_keyring 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 372940 ']' 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 372940 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 372940 ']' 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 372940 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 372940 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 372940' 00:29:28.190 killing process with pid 372940 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 372940 00:29:28.190 13:41:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 372940 00:29:29.565 13:41:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:29.565 13:41:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:29.565 13:41:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:29.565 13:41:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:29.565 13:41:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:29.565 13:41:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.565 13:41:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:29.565 13:41:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.110 13:41:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:32.110 13:41:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:32.110 00:29:32.110 real 0m48.448s 00:29:32.110 user 2m51.844s 00:29:32.110 sys 0m10.406s 00:29:32.110 13:41:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:32.110 13:41:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:32.110 ************************************ 00:29:32.110 END TEST nvmf_perf_adq 00:29:32.110 ************************************ 00:29:32.110 13:41:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:32.110 13:41:06 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:32.110 13:41:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:32.110 13:41:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.110 13:41:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:32.110 ************************************ 00:29:32.110 START TEST nvmf_shutdown 00:29:32.110 ************************************ 00:29:32.110 13:41:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:32.110 * Looking for test storage... 
00:29:32.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:32.110 13:41:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.110 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:32.110 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.110 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.110 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.110 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.110 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:32.111 ************************************ 00:29:32.111 START TEST nvmf_shutdown_tc1 00:29:32.111 ************************************ 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:29:32.111 13:41:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:32.111 13:41:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:34.014 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:34.014 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:34.014 13:41:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:34.014 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:34.014 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:34.014 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:34.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:29:34.015 00:29:34.015 --- 10.0.0.2 ping statistics --- 00:29:34.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.015 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:34.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:29:34.015 00:29:34.015 --- 10.0.0.1 ping statistics --- 00:29:34.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.015 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=376382 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 376382 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 376382 ']' 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:34.015 13:41:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:34.015 [2024-07-13 13:41:08.646086] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:29:34.015 [2024-07-13 13:41:08.646233] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.015 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.273 [2024-07-13 13:41:08.788524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.531 [2024-07-13 13:41:09.047924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.531 [2024-07-13 13:41:09.047995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.531 [2024-07-13 13:41:09.048029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.531 [2024-07-13 13:41:09.048050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.531 [2024-07-13 13:41:09.048070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.531 [2024-07-13 13:41:09.048204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.531 [2024-07-13 13:41:09.048318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.531 [2024-07-13 13:41:09.048366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.532 [2024-07-13 13:41:09.048372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:35.099 [2024-07-13 13:41:09.587200] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:35.099 13:41:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.099 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.100 13:41:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:35.100 Malloc1 00:29:35.100 [2024-07-13 13:41:09.713972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.100 Malloc2 00:29:35.359 Malloc3 00:29:35.359 Malloc4 00:29:35.359 Malloc5 00:29:35.619 Malloc6 00:29:35.619 Malloc7 00:29:35.879 Malloc8 00:29:35.879 Malloc9 00:29:35.879 Malloc10 00:29:36.139 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=376693 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 376693 
/var/tmp/bdevperf.sock 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 376693 ']' 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:36.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.140 { 00:29:36.140 "params": { 00:29:36.140 "name": "Nvme$subsystem", 00:29:36.140 "trtype": "$TEST_TRANSPORT", 00:29:36.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.140 "adrfam": "ipv4", 00:29:36.140 "trsvcid": "$NVMF_PORT", 00:29:36.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.140 "hdgst": ${hdgst:-false}, 00:29:36.140 "ddgst": ${ddgst:-false} 00:29:36.140 }, 00:29:36.140 "method": "bdev_nvme_attach_controller" 00:29:36.140 } 00:29:36.140 EOF 00:29:36.140 )") 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.140 { 00:29:36.140 "params": { 00:29:36.140 "name": "Nvme$subsystem", 00:29:36.140 "trtype": "$TEST_TRANSPORT", 00:29:36.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.140 "adrfam": "ipv4", 00:29:36.140 "trsvcid": "$NVMF_PORT", 00:29:36.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.140 "hdgst": ${hdgst:-false}, 00:29:36.140 "ddgst": ${ddgst:-false} 00:29:36.140 }, 00:29:36.140 "method": "bdev_nvme_attach_controller" 00:29:36.140 } 00:29:36.140 EOF 00:29:36.140 )") 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.140 { 00:29:36.140 "params": { 00:29:36.140 
"name": "Nvme$subsystem", 00:29:36.140 "trtype": "$TEST_TRANSPORT", 00:29:36.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.140 "adrfam": "ipv4", 00:29:36.140 "trsvcid": "$NVMF_PORT", 00:29:36.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.140 "hdgst": ${hdgst:-false}, 00:29:36.140 "ddgst": ${ddgst:-false} 00:29:36.140 }, 00:29:36.140 "method": "bdev_nvme_attach_controller" 00:29:36.140 } 00:29:36.140 EOF 00:29:36.140 )") 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.140 { 00:29:36.140 "params": { 00:29:36.140 "name": "Nvme$subsystem", 00:29:36.140 "trtype": "$TEST_TRANSPORT", 00:29:36.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.140 "adrfam": "ipv4", 00:29:36.140 "trsvcid": "$NVMF_PORT", 00:29:36.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.140 "hdgst": ${hdgst:-false}, 00:29:36.140 "ddgst": ${ddgst:-false} 00:29:36.140 }, 00:29:36.140 "method": "bdev_nvme_attach_controller" 00:29:36.140 } 00:29:36.140 EOF 00:29:36.140 )") 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.140 { 00:29:36.140 "params": { 00:29:36.140 "name": "Nvme$subsystem", 00:29:36.140 "trtype": "$TEST_TRANSPORT", 00:29:36.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.140 "adrfam": "ipv4", 00:29:36.140 "trsvcid": "$NVMF_PORT", 00:29:36.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.140 "hdgst": ${hdgst:-false}, 00:29:36.140 "ddgst": ${ddgst:-false} 00:29:36.140 }, 00:29:36.140 "method": "bdev_nvme_attach_controller" 00:29:36.140 } 00:29:36.140 EOF 00:29:36.140 )") 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.140 { 00:29:36.140 "params": { 00:29:36.140 "name": "Nvme$subsystem", 00:29:36.140 "trtype": "$TEST_TRANSPORT", 00:29:36.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.140 "adrfam": "ipv4", 00:29:36.140 "trsvcid": "$NVMF_PORT", 00:29:36.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.140 "hdgst": ${hdgst:-false}, 00:29:36.140 "ddgst": ${ddgst:-false} 00:29:36.140 }, 00:29:36.140 "method": "bdev_nvme_attach_controller" 00:29:36.140 } 00:29:36.140 EOF 00:29:36.140 )") 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.140 { 00:29:36.140 "params": { 00:29:36.140 "name": "Nvme$subsystem", 
00:29:36.140 "trtype": "$TEST_TRANSPORT", 00:29:36.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.140 "adrfam": "ipv4", 00:29:36.140 "trsvcid": "$NVMF_PORT", 00:29:36.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.140 "hdgst": ${hdgst:-false}, 00:29:36.140 "ddgst": ${ddgst:-false} 00:29:36.140 }, 00:29:36.140 "method": "bdev_nvme_attach_controller" 00:29:36.140 } 00:29:36.140 EOF 00:29:36.140 )") 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.140 { 00:29:36.140 "params": { 00:29:36.140 "name": "Nvme$subsystem", 00:29:36.140 "trtype": "$TEST_TRANSPORT", 00:29:36.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.140 "adrfam": "ipv4", 00:29:36.140 "trsvcid": "$NVMF_PORT", 00:29:36.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.140 "hdgst": ${hdgst:-false}, 00:29:36.140 "ddgst": ${ddgst:-false} 00:29:36.140 }, 00:29:36.140 "method": "bdev_nvme_attach_controller" 00:29:36.140 } 00:29:36.140 EOF 00:29:36.140 )") 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.140 { 00:29:36.140 "params": { 00:29:36.140 "name": "Nvme$subsystem", 00:29:36.140 "trtype": "$TEST_TRANSPORT", 00:29:36.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.140 "adrfam": "ipv4", 00:29:36.140 "trsvcid": "$NVMF_PORT", 00:29:36.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.140 "hdgst": ${hdgst:-false}, 00:29:36.140 "ddgst": ${ddgst:-false} 00:29:36.140 }, 00:29:36.140 "method": "bdev_nvme_attach_controller" 00:29:36.140 } 00:29:36.140 EOF 00:29:36.140 )") 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.140 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.140 { 00:29:36.140 "params": { 00:29:36.140 "name": "Nvme$subsystem", 00:29:36.140 "trtype": "$TEST_TRANSPORT", 00:29:36.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.140 "adrfam": "ipv4", 00:29:36.140 "trsvcid": "$NVMF_PORT", 00:29:36.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.140 "hdgst": ${hdgst:-false}, 00:29:36.141 "ddgst": ${ddgst:-false} 00:29:36.141 }, 00:29:36.141 "method": "bdev_nvme_attach_controller" 00:29:36.141 } 00:29:36.141 EOF 00:29:36.141 )") 00:29:36.141 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.141 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:29:36.141 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:36.141 13:41:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:36.141 "params": { 00:29:36.141 "name": "Nvme1", 00:29:36.141 "trtype": "tcp", 00:29:36.141 "traddr": "10.0.0.2", 00:29:36.141 "adrfam": "ipv4", 00:29:36.141 "trsvcid": "4420", 00:29:36.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:36.141 "hdgst": false, 00:29:36.141 "ddgst": false 00:29:36.141 }, 00:29:36.141 "method": "bdev_nvme_attach_controller" 00:29:36.141 },{ 00:29:36.141 "params": { 00:29:36.141 "name": "Nvme2", 00:29:36.141 "trtype": "tcp", 00:29:36.141 "traddr": "10.0.0.2", 00:29:36.141 "adrfam": "ipv4", 00:29:36.141 "trsvcid": "4420", 00:29:36.141 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:36.141 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:36.141 "hdgst": false, 00:29:36.141 "ddgst": false 00:29:36.141 }, 00:29:36.141 "method": "bdev_nvme_attach_controller" 00:29:36.141 },{ 00:29:36.141 "params": { 00:29:36.141 "name": "Nvme3", 00:29:36.141 "trtype": "tcp", 00:29:36.141 "traddr": "10.0.0.2", 00:29:36.141 "adrfam": "ipv4", 00:29:36.141 "trsvcid": "4420", 00:29:36.141 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:36.141 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:36.141 "hdgst": false, 00:29:36.141 "ddgst": false 00:29:36.141 }, 00:29:36.141 "method": "bdev_nvme_attach_controller" 00:29:36.141 },{ 00:29:36.141 "params": { 00:29:36.141 "name": "Nvme4", 00:29:36.141 "trtype": "tcp", 00:29:36.141 "traddr": "10.0.0.2", 00:29:36.141 "adrfam": "ipv4", 00:29:36.141 "trsvcid": "4420", 00:29:36.141 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:36.141 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:36.141 "hdgst": false, 00:29:36.141 "ddgst": false 00:29:36.141 }, 00:29:36.141 "method": "bdev_nvme_attach_controller" 00:29:36.141 },{ 00:29:36.141 "params": { 00:29:36.141 "name": "Nvme5", 00:29:36.141 "trtype": "tcp", 00:29:36.141 "traddr": "10.0.0.2", 00:29:36.141 "adrfam": "ipv4", 00:29:36.141 "trsvcid": "4420", 00:29:36.141 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:36.141 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:36.141 "hdgst": false, 00:29:36.141 "ddgst": false 00:29:36.141 }, 00:29:36.141 "method": "bdev_nvme_attach_controller" 00:29:36.141 },{ 00:29:36.141 "params": { 00:29:36.141 "name": "Nvme6", 00:29:36.141 "trtype": "tcp", 00:29:36.141 "traddr": "10.0.0.2", 00:29:36.141 "adrfam": "ipv4", 00:29:36.141 "trsvcid": "4420", 00:29:36.141 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:36.141 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:36.141 "hdgst": false, 00:29:36.141 "ddgst": false 00:29:36.141 }, 00:29:36.141 "method": "bdev_nvme_attach_controller" 00:29:36.141 },{ 00:29:36.141 "params": { 00:29:36.141 "name": "Nvme7", 00:29:36.141 "trtype": "tcp", 00:29:36.141 "traddr": "10.0.0.2", 00:29:36.141 "adrfam": "ipv4", 00:29:36.141 "trsvcid": "4420", 00:29:36.141 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:36.141 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:36.141 "hdgst": false, 00:29:36.141 "ddgst": false 00:29:36.141 }, 00:29:36.141 "method": "bdev_nvme_attach_controller" 00:29:36.141 },{ 00:29:36.141 "params": { 00:29:36.141 "name": "Nvme8", 00:29:36.141 "trtype": "tcp", 00:29:36.141 "traddr": "10.0.0.2", 00:29:36.141 "adrfam": "ipv4", 00:29:36.141 "trsvcid": "4420", 00:29:36.141 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:36.141 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:36.141 "hdgst": false, 
00:29:36.141 "ddgst": false 00:29:36.141 }, 00:29:36.141 "method": "bdev_nvme_attach_controller" 00:29:36.141 },{ 00:29:36.141 "params": { 00:29:36.141 "name": "Nvme9", 00:29:36.141 "trtype": "tcp", 00:29:36.141 "traddr": "10.0.0.2", 00:29:36.141 "adrfam": "ipv4", 00:29:36.141 "trsvcid": "4420", 00:29:36.141 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:36.141 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:36.141 "hdgst": false, 00:29:36.141 "ddgst": false 00:29:36.141 }, 00:29:36.141 "method": "bdev_nvme_attach_controller" 00:29:36.141 },{ 00:29:36.141 "params": { 00:29:36.141 "name": "Nvme10", 00:29:36.141 "trtype": "tcp", 00:29:36.141 "traddr": "10.0.0.2", 00:29:36.141 "adrfam": "ipv4", 00:29:36.141 "trsvcid": "4420", 00:29:36.141 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:36.141 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:36.141 "hdgst": false, 00:29:36.141 "ddgst": false 00:29:36.141 }, 00:29:36.141 "method": "bdev_nvme_attach_controller" 00:29:36.141 }' 00:29:36.141 [2024-07-13 13:41:10.734311] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:36.141 [2024-07-13 13:41:10.734466] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:36.141 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.141 [2024-07-13 13:41:10.863883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.400 [2024-07-13 13:41:11.104808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.940 13:41:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:38.940 13:41:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:29:38.940 13:41:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:38.940 13:41:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.940 13:41:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:38.940 13:41:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.940 13:41:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 376693 00:29:38.940 13:41:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:29:38.940 13:41:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:29:39.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 376693 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 376382 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@532 -- # local subsystem config 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.877 { 00:29:39.877 "params": { 00:29:39.877 "name": "Nvme$subsystem", 00:29:39.877 "trtype": "$TEST_TRANSPORT", 00:29:39.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.877 "adrfam": "ipv4", 00:29:39.877 "trsvcid": "$NVMF_PORT", 00:29:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.877 "hdgst": ${hdgst:-false}, 00:29:39.877 "ddgst": ${ddgst:-false} 00:29:39.877 }, 00:29:39.877 "method": "bdev_nvme_attach_controller" 00:29:39.877 } 00:29:39.877 EOF 00:29:39.877 )") 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.877 { 00:29:39.877 "params": { 00:29:39.877 "name": "Nvme$subsystem", 00:29:39.877 "trtype": "$TEST_TRANSPORT", 00:29:39.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.877 "adrfam": "ipv4", 00:29:39.877 "trsvcid": "$NVMF_PORT", 00:29:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.877 "hdgst": ${hdgst:-false}, 00:29:39.877 "ddgst": ${ddgst:-false} 00:29:39.877 }, 00:29:39.877 "method": "bdev_nvme_attach_controller" 00:29:39.877 } 00:29:39.877 EOF 00:29:39.877 )") 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.877 { 00:29:39.877 "params": { 00:29:39.877 "name": "Nvme$subsystem", 00:29:39.877 "trtype": "$TEST_TRANSPORT", 00:29:39.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.877 "adrfam": "ipv4", 00:29:39.877 "trsvcid": "$NVMF_PORT", 00:29:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.877 "hdgst": ${hdgst:-false}, 00:29:39.877 "ddgst": ${ddgst:-false} 00:29:39.877 }, 00:29:39.877 "method": "bdev_nvme_attach_controller" 00:29:39.877 } 00:29:39.877 EOF 00:29:39.877 )") 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.877 { 00:29:39.877 "params": { 00:29:39.877 "name": "Nvme$subsystem", 00:29:39.877 "trtype": "$TEST_TRANSPORT", 00:29:39.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.877 "adrfam": "ipv4", 00:29:39.877 "trsvcid": "$NVMF_PORT", 00:29:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.877 "hdgst": ${hdgst:-false}, 00:29:39.877 "ddgst": ${ddgst:-false} 00:29:39.877 }, 00:29:39.877 "method": "bdev_nvme_attach_controller" 00:29:39.877 } 00:29:39.877 EOF 00:29:39.877 )") 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.877 { 00:29:39.877 "params": { 00:29:39.877 "name": "Nvme$subsystem", 00:29:39.877 "trtype": "$TEST_TRANSPORT", 00:29:39.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.877 "adrfam": "ipv4", 00:29:39.877 "trsvcid": "$NVMF_PORT", 00:29:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.877 "hdgst": ${hdgst:-false}, 00:29:39.877 "ddgst": ${ddgst:-false} 00:29:39.877 }, 00:29:39.877 "method": "bdev_nvme_attach_controller" 00:29:39.877 } 00:29:39.877 EOF 00:29:39.877 )") 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.877 { 00:29:39.877 "params": { 00:29:39.877 "name": "Nvme$subsystem", 00:29:39.877 "trtype": "$TEST_TRANSPORT", 00:29:39.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.877 "adrfam": "ipv4", 00:29:39.877 "trsvcid": "$NVMF_PORT", 00:29:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.877 "hdgst": ${hdgst:-false}, 00:29:39.877 "ddgst": ${ddgst:-false} 00:29:39.877 }, 00:29:39.877 "method": "bdev_nvme_attach_controller" 00:29:39.877 } 00:29:39.877 EOF 00:29:39.877 )") 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.877 { 00:29:39.877 "params": { 00:29:39.877 "name": "Nvme$subsystem", 00:29:39.877 "trtype": "$TEST_TRANSPORT", 00:29:39.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.877 "adrfam": "ipv4", 00:29:39.877 "trsvcid": "$NVMF_PORT", 00:29:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.877 "hdgst": ${hdgst:-false}, 00:29:39.877 "ddgst": ${ddgst:-false} 00:29:39.877 }, 00:29:39.877 "method": "bdev_nvme_attach_controller" 00:29:39.877 } 00:29:39.877 EOF 00:29:39.877 )") 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.877 { 00:29:39.877 "params": { 00:29:39.877 "name": "Nvme$subsystem", 00:29:39.877 "trtype": "$TEST_TRANSPORT", 00:29:39.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.877 "adrfam": "ipv4", 00:29:39.877 "trsvcid": "$NVMF_PORT", 00:29:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.877 "hdgst": ${hdgst:-false}, 00:29:39.877 "ddgst": ${ddgst:-false} 00:29:39.877 }, 00:29:39.877 "method": "bdev_nvme_attach_controller" 00:29:39.877 } 00:29:39.877 EOF 00:29:39.877 )") 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.877 { 00:29:39.877 "params": { 00:29:39.877 "name": "Nvme$subsystem", 00:29:39.877 "trtype": "$TEST_TRANSPORT", 00:29:39.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.877 "adrfam": "ipv4", 00:29:39.877 "trsvcid": "$NVMF_PORT", 00:29:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.877 "hdgst": ${hdgst:-false}, 00:29:39.877 "ddgst": ${ddgst:-false} 00:29:39.877 }, 00:29:39.877 "method": "bdev_nvme_attach_controller" 00:29:39.877 } 00:29:39.877 EOF 00:29:39.877 )") 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.877 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.877 { 00:29:39.877 "params": { 00:29:39.877 "name": "Nvme$subsystem", 00:29:39.877 "trtype": "$TEST_TRANSPORT", 00:29:39.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.877 "adrfam": "ipv4", 00:29:39.877 "trsvcid": "$NVMF_PORT", 00:29:39.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.877 "hdgst": ${hdgst:-false}, 00:29:39.877 "ddgst": ${ddgst:-false} 00:29:39.877 }, 00:29:39.877 "method": "bdev_nvme_attach_controller" 00:29:39.877 } 00:29:39.877 EOF 00:29:39.877 )") 00:29:39.878 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.878 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
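After the first bdev_svc instance (pid 376693) is killed off by the test, the harness regenerates the same ten-controller JSON and launches the real bdevperf against it with -q 64 -o 65536 -w verify -t 1: queue depth 64, 64 KiB I/Os, a verify workload (data written is read back and checked), one second per run. Roughly the standalone equivalent, assuming the generated config is saved to a file rather than streamed over /dev/fd/62 (the temp-file path is illustrative):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 > /tmp/nvmf_bdevperf.json   # helper from nvmf/common.sh, as in the xtrace above
./build/examples/bdevperf --json /tmp/nvmf_bdevperf.json -q 64 -o 65536 -w verify -t 1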
00:29:39.878 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:39.878 13:41:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:39.878 "params": { 00:29:39.878 "name": "Nvme1", 00:29:39.878 "trtype": "tcp", 00:29:39.878 "traddr": "10.0.0.2", 00:29:39.878 "adrfam": "ipv4", 00:29:39.878 "trsvcid": "4420", 00:29:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:39.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:39.878 "hdgst": false, 00:29:39.878 "ddgst": false 00:29:39.878 }, 00:29:39.878 "method": "bdev_nvme_attach_controller" 00:29:39.878 },{ 00:29:39.878 "params": { 00:29:39.878 "name": "Nvme2", 00:29:39.878 "trtype": "tcp", 00:29:39.878 "traddr": "10.0.0.2", 00:29:39.878 "adrfam": "ipv4", 00:29:39.878 "trsvcid": "4420", 00:29:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:39.878 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:39.878 "hdgst": false, 00:29:39.878 "ddgst": false 00:29:39.878 }, 00:29:39.878 "method": "bdev_nvme_attach_controller" 00:29:39.878 },{ 00:29:39.878 "params": { 00:29:39.878 "name": "Nvme3", 00:29:39.878 "trtype": "tcp", 00:29:39.878 "traddr": "10.0.0.2", 00:29:39.878 "adrfam": "ipv4", 00:29:39.878 "trsvcid": "4420", 00:29:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:39.878 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:39.878 "hdgst": false, 00:29:39.878 "ddgst": false 00:29:39.878 }, 00:29:39.878 "method": "bdev_nvme_attach_controller" 00:29:39.878 },{ 00:29:39.878 "params": { 00:29:39.878 "name": "Nvme4", 00:29:39.878 "trtype": "tcp", 00:29:39.878 "traddr": "10.0.0.2", 00:29:39.878 "adrfam": "ipv4", 00:29:39.878 "trsvcid": "4420", 00:29:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:39.878 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:39.878 "hdgst": false, 00:29:39.878 "ddgst": false 00:29:39.878 }, 00:29:39.878 "method": "bdev_nvme_attach_controller" 00:29:39.878 },{ 00:29:39.878 "params": { 00:29:39.878 "name": "Nvme5", 00:29:39.878 "trtype": "tcp", 00:29:39.878 "traddr": "10.0.0.2", 00:29:39.878 "adrfam": "ipv4", 00:29:39.878 "trsvcid": "4420", 00:29:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:39.878 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:39.878 "hdgst": false, 00:29:39.878 "ddgst": false 00:29:39.878 }, 00:29:39.878 "method": "bdev_nvme_attach_controller" 00:29:39.878 },{ 00:29:39.878 "params": { 00:29:39.878 "name": "Nvme6", 00:29:39.878 "trtype": "tcp", 00:29:39.878 "traddr": "10.0.0.2", 00:29:39.878 "adrfam": "ipv4", 00:29:39.878 "trsvcid": "4420", 00:29:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:39.878 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:39.878 "hdgst": false, 00:29:39.878 "ddgst": false 00:29:39.878 }, 00:29:39.878 "method": "bdev_nvme_attach_controller" 00:29:39.878 },{ 00:29:39.878 "params": { 00:29:39.878 "name": "Nvme7", 00:29:39.878 "trtype": "tcp", 00:29:39.878 "traddr": "10.0.0.2", 00:29:39.878 "adrfam": "ipv4", 00:29:39.878 "trsvcid": "4420", 00:29:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:39.878 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:39.878 "hdgst": false, 00:29:39.878 "ddgst": false 00:29:39.878 }, 00:29:39.878 "method": "bdev_nvme_attach_controller" 00:29:39.878 },{ 00:29:39.878 "params": { 00:29:39.878 "name": "Nvme8", 00:29:39.878 "trtype": "tcp", 00:29:39.878 "traddr": "10.0.0.2", 00:29:39.878 "adrfam": "ipv4", 00:29:39.878 "trsvcid": "4420", 00:29:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:39.878 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:39.878 "hdgst": false, 
00:29:39.878 "ddgst": false 00:29:39.878 }, 00:29:39.878 "method": "bdev_nvme_attach_controller" 00:29:39.878 },{ 00:29:39.878 "params": { 00:29:39.878 "name": "Nvme9", 00:29:39.878 "trtype": "tcp", 00:29:39.878 "traddr": "10.0.0.2", 00:29:39.878 "adrfam": "ipv4", 00:29:39.878 "trsvcid": "4420", 00:29:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:39.878 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:39.878 "hdgst": false, 00:29:39.878 "ddgst": false 00:29:39.878 }, 00:29:39.878 "method": "bdev_nvme_attach_controller" 00:29:39.878 },{ 00:29:39.878 "params": { 00:29:39.878 "name": "Nvme10", 00:29:39.878 "trtype": "tcp", 00:29:39.878 "traddr": "10.0.0.2", 00:29:39.878 "adrfam": "ipv4", 00:29:39.878 "trsvcid": "4420", 00:29:39.878 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:39.878 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:39.878 "hdgst": false, 00:29:39.878 "ddgst": false 00:29:39.878 }, 00:29:39.878 "method": "bdev_nvme_attach_controller" 00:29:39.878 }' 00:29:39.878 [2024-07-13 13:41:14.494731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:39.878 [2024-07-13 13:41:14.494923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid377237 ] 00:29:39.878 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.136 [2024-07-13 13:41:14.623522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.136 [2024-07-13 13:41:14.860253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.036 Running I/O for 1 seconds... 00:29:43.440 00:29:43.440 Latency(us) 00:29:43.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.440 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.440 Verification LBA range: start 0x0 length 0x400 00:29:43.440 Nvme1n1 : 1.14 168.74 10.55 0.00 0.00 375249.22 25631.86 312242.63 00:29:43.440 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.440 Verification LBA range: start 0x0 length 0x400 00:29:43.440 Nvme2n1 : 1.13 169.83 10.61 0.00 0.00 366156.10 25049.32 310689.19 00:29:43.440 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.440 Verification LBA range: start 0x0 length 0x400 00:29:43.440 Nvme3n1 : 1.15 225.61 14.10 0.00 0.00 268887.86 10388.67 253211.69 00:29:43.440 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.440 Verification LBA range: start 0x0 length 0x400 00:29:43.440 Nvme4n1 : 1.18 216.44 13.53 0.00 0.00 277605.45 21068.61 296708.17 00:29:43.440 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.440 Verification LBA range: start 0x0 length 0x400 00:29:43.440 Nvme5n1 : 1.15 167.17 10.45 0.00 0.00 351982.43 30098.01 316902.97 00:29:43.440 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.440 Verification LBA range: start 0x0 length 0x400 00:29:43.440 Nvme6n1 : 1.20 213.28 13.33 0.00 0.00 271930.41 26214.40 318456.41 00:29:43.440 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.440 Verification LBA range: start 0x0 length 0x400 00:29:43.440 Nvme7n1 : 1.21 212.30 13.27 0.00 0.00 268433.07 22330.79 313796.08 00:29:43.440 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.440 Verification LBA range: start 
0x0 length 0x400 00:29:43.440 Nvme8n1 : 1.21 210.87 13.18 0.00 0.00 265668.65 21262.79 312242.63 00:29:43.440 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.440 Verification LBA range: start 0x0 length 0x400 00:29:43.440 Nvme9n1 : 1.19 161.33 10.08 0.00 0.00 339995.31 27573.67 344865.00 00:29:43.440 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.440 Verification LBA range: start 0x0 length 0x400 00:29:43.440 Nvme10n1 : 1.23 207.78 12.99 0.00 0.00 260337.59 12621.75 351078.78 00:29:43.440 =================================================================================================================== 00:29:43.440 Total : 1953.35 122.08 0.00 0.00 298604.00 10388.67 351078.78 00:29:44.377 13:41:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:29:44.377 13:41:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:44.377 13:41:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:44.377 13:41:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:44.377 13:41:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:44.377 13:41:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:44.377 13:41:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:29:44.377 13:41:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:44.377 13:41:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:29:44.377 13:41:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:44.377 13:41:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:44.377 rmmod nvme_tcp 00:29:44.377 rmmod nvme_fabrics 00:29:44.377 rmmod nvme_keyring 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 376382 ']' 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 376382 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 376382 ']' 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 376382 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 376382 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
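As a quick sanity check on the summary line above, throughput is just IOPS times the 64 KiB I/O size; for the Total row, 1953.35 IOPS works out to about 122.08 MiB/s, matching the MiB/s column:

awk 'BEGIN { printf "%.2f MiB/s\n", 1953.35 * 65536 / (1024 * 1024) }'
# prints: 122.08 MiB/s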
00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 376382' 00:29:44.377 killing process with pid 376382 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 376382 00:29:44.377 13:41:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 376382 00:29:47.664 13:41:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:47.664 13:41:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:47.664 13:41:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:47.664 13:41:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:47.664 13:41:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:47.664 13:41:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.664 13:41:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:47.664 13:41:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.565 13:41:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:49.565 00:29:49.565 real 0m17.575s 00:29:49.565 user 0m56.505s 00:29:49.565 sys 0m3.947s 00:29:49.565 13:41:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:49.565 13:41:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:49.565 ************************************ 00:29:49.565 END TEST nvmf_shutdown_tc1 00:29:49.565 ************************************ 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:49.565 ************************************ 00:29:49.565 START TEST nvmf_shutdown_tc2 00:29:49.565 ************************************ 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.565 13:41:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:49.565 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:49.566 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:49.566 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:49.566 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:49.566 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:49.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:49.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:29:49.566 00:29:49.566 --- 10.0.0.2 ping statistics --- 00:29:49.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.566 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:49.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:49.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:29:49.566 00:29:49.566 --- 10.0.0.1 ping statistics --- 00:29:49.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.566 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=378403 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 378403 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 378403 ']' 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:49.566 13:41:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:49.824 [2024-07-13 13:41:24.316237] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:49.824 [2024-07-13 13:41:24.316394] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.824 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.824 [2024-07-13 13:41:24.457319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.082 [2024-07-13 13:41:24.773666] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.082 [2024-07-13 13:41:24.773749] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.082 [2024-07-13 13:41:24.773783] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.082 [2024-07-13 13:41:24.773821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.082 [2024-07-13 13:41:24.773881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
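The nvmftestinit sequence above pairs the two e810 ports back-to-back through a network namespace so that target and initiator can use real NICs on a single host: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 is opened, and connectivity is checked with ping in both directions before the target starts. Condensed from the commands in the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                    # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target namespace -> initiator

The nvmf_tgt whose startup notices appear above is then run inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E), so it listens on 10.0.0.2 port 4420 while the host-side tools connect from 10.0.0.1.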
00:29:50.082 [2024-07-13 13:41:24.774024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.082 [2024-07-13 13:41:24.774103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.082 [2024-07-13 13:41:24.774166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.082 [2024-07-13 13:41:24.774186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.647 [2024-07-13 13:41:25.298005] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:50.647 13:41:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.647 13:41:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.904 Malloc1 00:29:50.904 [2024-07-13 13:41:25.466600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.904 Malloc2 00:29:51.162 Malloc3 00:29:51.162 Malloc4 00:29:51.419 Malloc5 00:29:51.419 Malloc6 00:29:51.677 Malloc7 00:29:51.677 Malloc8 00:29:51.936 Malloc9 00:29:51.936 Malloc10 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=378715 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 378715 /var/tmp/bdevperf.sock 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 378715 ']' 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
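The shutdown.sh@26-35 lines above build rpcs.txt by appending one block per subsystem (the ten 'cat' calls) and then replay the whole file through rpc_cmd in a single batch. The heredoc body itself is not echoed in this trace, so the block below is a representative reconstruction, not a verbatim copy: it is consistent with the Malloc1..Malloc10 bdevs and the single NVMe/TCP listener on 10.0.0.2:4420 reported by the target right after, but the bdev size/block size, serial numbers and exact option spelling are assumptions.

  # appended once per i in 1..10 (reconstruction, see note above)
  bdev_malloc_create 64 512 -b Malloc$i
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

rpc_cmd with no arguments (shutdown.sh@35) then feeds the accumulated file to scripts/rpc.py on stdin, so all ten subsystems are created over one RPC session; the tcp.c 'Listening on 10.0.0.2 port 4420' notice is the target acknowledging the listener.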
00:29:51.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.936 { 00:29:51.936 "params": { 00:29:51.936 "name": "Nvme$subsystem", 00:29:51.936 "trtype": "$TEST_TRANSPORT", 00:29:51.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.936 "adrfam": "ipv4", 00:29:51.936 "trsvcid": "$NVMF_PORT", 00:29:51.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.936 "hdgst": ${hdgst:-false}, 00:29:51.936 "ddgst": ${ddgst:-false} 00:29:51.936 }, 00:29:51.936 "method": "bdev_nvme_attach_controller" 00:29:51.936 } 00:29:51.936 EOF 00:29:51.936 )") 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.936 { 00:29:51.936 "params": { 00:29:51.936 "name": "Nvme$subsystem", 00:29:51.936 "trtype": "$TEST_TRANSPORT", 00:29:51.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.936 "adrfam": "ipv4", 00:29:51.936 "trsvcid": "$NVMF_PORT", 00:29:51.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.936 "hdgst": ${hdgst:-false}, 00:29:51.936 "ddgst": ${ddgst:-false} 00:29:51.936 }, 00:29:51.936 "method": "bdev_nvme_attach_controller" 00:29:51.936 } 00:29:51.936 EOF 00:29:51.936 )") 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.936 { 00:29:51.936 "params": { 00:29:51.936 "name": "Nvme$subsystem", 00:29:51.936 "trtype": "$TEST_TRANSPORT", 00:29:51.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.936 "adrfam": "ipv4", 00:29:51.936 "trsvcid": "$NVMF_PORT", 00:29:51.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.936 "hdgst": ${hdgst:-false}, 00:29:51.936 "ddgst": ${ddgst:-false} 00:29:51.936 }, 00:29:51.936 "method": "bdev_nvme_attach_controller" 00:29:51.936 } 00:29:51.936 EOF 00:29:51.936 )") 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.936 { 00:29:51.936 "params": { 00:29:51.936 "name": "Nvme$subsystem", 00:29:51.936 "trtype": "$TEST_TRANSPORT", 00:29:51.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.936 "adrfam": "ipv4", 00:29:51.936 "trsvcid": "$NVMF_PORT", 
00:29:51.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.936 "hdgst": ${hdgst:-false}, 00:29:51.936 "ddgst": ${ddgst:-false} 00:29:51.936 }, 00:29:51.936 "method": "bdev_nvme_attach_controller" 00:29:51.936 } 00:29:51.936 EOF 00:29:51.936 )") 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.936 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.936 { 00:29:51.936 "params": { 00:29:51.936 "name": "Nvme$subsystem", 00:29:51.936 "trtype": "$TEST_TRANSPORT", 00:29:51.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.936 "adrfam": "ipv4", 00:29:51.936 "trsvcid": "$NVMF_PORT", 00:29:51.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.936 "hdgst": ${hdgst:-false}, 00:29:51.936 "ddgst": ${ddgst:-false} 00:29:51.936 }, 00:29:51.936 "method": "bdev_nvme_attach_controller" 00:29:51.936 } 00:29:51.937 EOF 00:29:51.937 )") 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.937 { 00:29:51.937 "params": { 00:29:51.937 "name": "Nvme$subsystem", 00:29:51.937 "trtype": "$TEST_TRANSPORT", 00:29:51.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.937 "adrfam": "ipv4", 00:29:51.937 "trsvcid": "$NVMF_PORT", 00:29:51.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.937 "hdgst": ${hdgst:-false}, 00:29:51.937 "ddgst": ${ddgst:-false} 00:29:51.937 }, 00:29:51.937 "method": "bdev_nvme_attach_controller" 00:29:51.937 } 00:29:51.937 EOF 00:29:51.937 )") 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.937 { 00:29:51.937 "params": { 00:29:51.937 "name": "Nvme$subsystem", 00:29:51.937 "trtype": "$TEST_TRANSPORT", 00:29:51.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.937 "adrfam": "ipv4", 00:29:51.937 "trsvcid": "$NVMF_PORT", 00:29:51.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.937 "hdgst": ${hdgst:-false}, 00:29:51.937 "ddgst": ${ddgst:-false} 00:29:51.937 }, 00:29:51.937 "method": "bdev_nvme_attach_controller" 00:29:51.937 } 00:29:51.937 EOF 00:29:51.937 )") 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.937 { 00:29:51.937 "params": { 00:29:51.937 "name": "Nvme$subsystem", 00:29:51.937 "trtype": "$TEST_TRANSPORT", 00:29:51.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.937 "adrfam": "ipv4", 00:29:51.937 "trsvcid": "$NVMF_PORT", 00:29:51.937 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.937 "hdgst": ${hdgst:-false}, 00:29:51.937 "ddgst": ${ddgst:-false} 00:29:51.937 }, 00:29:51.937 "method": "bdev_nvme_attach_controller" 00:29:51.937 } 00:29:51.937 EOF 00:29:51.937 )") 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.937 { 00:29:51.937 "params": { 00:29:51.937 "name": "Nvme$subsystem", 00:29:51.937 "trtype": "$TEST_TRANSPORT", 00:29:51.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.937 "adrfam": "ipv4", 00:29:51.937 "trsvcid": "$NVMF_PORT", 00:29:51.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.937 "hdgst": ${hdgst:-false}, 00:29:51.937 "ddgst": ${ddgst:-false} 00:29:51.937 }, 00:29:51.937 "method": "bdev_nvme_attach_controller" 00:29:51.937 } 00:29:51.937 EOF 00:29:51.937 )") 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.937 { 00:29:51.937 "params": { 00:29:51.937 "name": "Nvme$subsystem", 00:29:51.937 "trtype": "$TEST_TRANSPORT", 00:29:51.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.937 "adrfam": "ipv4", 00:29:51.937 "trsvcid": "$NVMF_PORT", 00:29:51.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.937 "hdgst": ${hdgst:-false}, 00:29:51.937 "ddgst": ${ddgst:-false} 00:29:51.937 }, 00:29:51.937 "method": "bdev_nvme_attach_controller" 00:29:51.937 } 00:29:51.937 EOF 00:29:51.937 )") 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:29:51.937 13:41:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:51.937 "params": { 00:29:51.937 "name": "Nvme1", 00:29:51.937 "trtype": "tcp", 00:29:51.937 "traddr": "10.0.0.2", 00:29:51.937 "adrfam": "ipv4", 00:29:51.937 "trsvcid": "4420", 00:29:51.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.937 "hdgst": false, 00:29:51.937 "ddgst": false 00:29:51.937 }, 00:29:51.937 "method": "bdev_nvme_attach_controller" 00:29:51.937 },{ 00:29:51.937 "params": { 00:29:51.937 "name": "Nvme2", 00:29:51.937 "trtype": "tcp", 00:29:51.937 "traddr": "10.0.0.2", 00:29:51.937 "adrfam": "ipv4", 00:29:51.937 "trsvcid": "4420", 00:29:51.937 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:51.937 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:51.937 "hdgst": false, 00:29:51.937 "ddgst": false 00:29:51.937 }, 00:29:51.937 "method": "bdev_nvme_attach_controller" 00:29:51.937 },{ 00:29:51.937 "params": { 00:29:51.937 "name": "Nvme3", 00:29:51.937 "trtype": "tcp", 00:29:51.937 "traddr": "10.0.0.2", 00:29:51.937 "adrfam": "ipv4", 00:29:51.937 "trsvcid": "4420", 00:29:51.937 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:51.937 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:51.937 "hdgst": false, 00:29:51.937 "ddgst": false 00:29:51.937 }, 00:29:51.937 "method": "bdev_nvme_attach_controller" 00:29:51.937 },{ 00:29:51.937 "params": { 00:29:51.937 "name": "Nvme4", 00:29:51.937 "trtype": "tcp", 00:29:51.937 "traddr": "10.0.0.2", 00:29:51.937 "adrfam": "ipv4", 00:29:51.937 "trsvcid": "4420", 00:29:51.937 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:51.937 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:51.937 "hdgst": false, 00:29:51.937 "ddgst": false 00:29:51.937 }, 00:29:51.937 "method": "bdev_nvme_attach_controller" 00:29:51.937 },{ 00:29:51.937 "params": { 00:29:51.937 "name": "Nvme5", 00:29:51.937 "trtype": "tcp", 00:29:51.937 "traddr": "10.0.0.2", 00:29:51.937 "adrfam": "ipv4", 00:29:51.937 "trsvcid": "4420", 00:29:51.937 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:51.937 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:51.938 "hdgst": false, 00:29:51.938 "ddgst": false 00:29:51.938 }, 00:29:51.938 "method": "bdev_nvme_attach_controller" 00:29:51.938 },{ 00:29:51.938 "params": { 00:29:51.938 "name": "Nvme6", 00:29:51.938 "trtype": "tcp", 00:29:51.938 "traddr": "10.0.0.2", 00:29:51.938 "adrfam": "ipv4", 00:29:51.938 "trsvcid": "4420", 00:29:51.938 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:51.938 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:51.938 "hdgst": false, 00:29:51.938 "ddgst": false 00:29:51.938 }, 00:29:51.938 "method": "bdev_nvme_attach_controller" 00:29:51.938 },{ 00:29:51.938 "params": { 00:29:51.938 "name": "Nvme7", 00:29:51.938 "trtype": "tcp", 00:29:51.938 "traddr": "10.0.0.2", 00:29:51.938 "adrfam": "ipv4", 00:29:51.938 "trsvcid": "4420", 00:29:51.938 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:51.938 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:51.938 "hdgst": false, 00:29:51.938 "ddgst": false 00:29:51.938 }, 00:29:51.938 "method": "bdev_nvme_attach_controller" 00:29:51.938 },{ 00:29:51.938 "params": { 00:29:51.938 "name": "Nvme8", 00:29:51.938 "trtype": "tcp", 00:29:51.938 "traddr": "10.0.0.2", 00:29:51.938 "adrfam": "ipv4", 00:29:51.938 "trsvcid": "4420", 00:29:51.938 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:51.938 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:51.938 "hdgst": false, 
00:29:51.938 "ddgst": false 00:29:51.938 }, 00:29:51.938 "method": "bdev_nvme_attach_controller" 00:29:51.938 },{ 00:29:51.938 "params": { 00:29:51.938 "name": "Nvme9", 00:29:51.938 "trtype": "tcp", 00:29:51.938 "traddr": "10.0.0.2", 00:29:51.938 "adrfam": "ipv4", 00:29:51.938 "trsvcid": "4420", 00:29:51.938 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:51.938 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:51.938 "hdgst": false, 00:29:51.938 "ddgst": false 00:29:51.938 }, 00:29:51.938 "method": "bdev_nvme_attach_controller" 00:29:51.938 },{ 00:29:51.938 "params": { 00:29:51.938 "name": "Nvme10", 00:29:51.938 "trtype": "tcp", 00:29:51.938 "traddr": "10.0.0.2", 00:29:51.938 "adrfam": "ipv4", 00:29:51.938 "trsvcid": "4420", 00:29:51.938 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:51.938 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:51.938 "hdgst": false, 00:29:51.938 "ddgst": false 00:29:51.938 }, 00:29:51.938 "method": "bdev_nvme_attach_controller" 00:29:51.938 }' 00:29:51.938 [2024-07-13 13:41:26.666500] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:51.938 [2024-07-13 13:41:26.666653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid378715 ] 00:29:52.196 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.196 [2024-07-13 13:41:26.795786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.453 [2024-07-13 13:41:27.035676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.350 Running I/O for 10 seconds... 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=72 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 72 -ge 100 ']' 00:29:54.915 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=136 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 378715 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 378715 ']' 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 378715 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 378715 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 378715' 00:29:55.172 killing process with pid 378715 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 378715 00:29:55.172 13:41:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 378715 00:29:55.172 Received shutdown signal, test time was about 1.040060 seconds 00:29:55.172 00:29:55.172 Latency(us) 00:29:55.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.172 Job: Nvme1n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:29:55.172 Verification LBA range: start 0x0 length 0x400 00:29:55.172 Nvme1n1 : 0.98 201.12 12.57 0.00 0.00 311843.30 10437.21 304475.40 00:29:55.173 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:55.173 Verification LBA range: start 0x0 length 0x400 00:29:55.173 Nvme2n1 : 1.00 192.91 12.06 0.00 0.00 320736.52 25631.86 304475.40 00:29:55.173 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:55.173 Verification LBA range: start 0x0 length 0x400 00:29:55.173 Nvme3n1 : 0.96 198.99 12.44 0.00 0.00 304324.96 24563.86 310689.19 00:29:55.173 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:55.173 Verification LBA range: start 0x0 length 0x400 00:29:55.173 Nvme4n1 : 1.00 191.60 11.98 0.00 0.00 309884.14 24272.59 284280.60 00:29:55.173 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:55.173 Verification LBA range: start 0x0 length 0x400 00:29:55.173 Nvme5n1 : 1.01 189.98 11.87 0.00 0.00 306075.88 26991.12 335544.32 00:29:55.173 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:55.173 Verification LBA range: start 0x0 length 0x400 00:29:55.173 Nvme6n1 : 1.02 188.70 11.79 0.00 0.00 301945.49 27185.30 307582.29 00:29:55.173 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:55.173 Verification LBA range: start 0x0 length 0x400 00:29:55.173 Nvme7n1 : 0.98 195.13 12.20 0.00 0.00 283962.03 27185.30 312242.63 00:29:55.173 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:55.173 Verification LBA range: start 0x0 length 0x400 00:29:55.173 Nvme8n1 : 1.03 186.58 11.66 0.00 0.00 292443.02 24563.86 337097.77 00:29:55.173 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:55.173 Verification LBA range: start 0x0 length 0x400 00:29:55.173 Nvme9n1 : 1.04 184.75 11.55 0.00 0.00 289512.87 24855.13 355739.12 00:29:55.173 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:55.173 Verification LBA range: start 0x0 length 0x400 00:29:55.173 Nvme10n1 : 1.03 187.20 11.70 0.00 0.00 278489.06 25631.86 306028.85 00:29:55.173 =================================================================================================================== 00:29:55.173 Total : 1916.98 119.81 0.00 0.00 299952.69 10437.21 355739.12 00:29:56.542 13:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 378403 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:29:57.475 
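This stretch is the core of nvmf_shutdown_tc2, reconstructed from the trace above: bdevperf (pid 378715) connects to all ten cnodes, framework_wait_init blocks until its JSON config is loaded, waitforio polls read counters until real traffic is flowing, and only then is bdevperf killed while the target is left running; the kill -0 378403 that follows is the actual assertion that the target survived losing its host mid-I/O. Socket path, bdev name, the 100-op threshold and the 0.25 s poll interval are taken from this log; the loop body paraphrases the shutdown.sh@57-69 lines above.

  rpc_cmd() { ./scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }   # helper for this sketch
  rpc_cmd framework_wait_init                        # bdevperf finished loading its config
  for (( i = 10; i != 0; i-- )); do                  # up to 10 polls, 0.25 s apart
    ops=$(rpc_cmd bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
    (( ops >= 100 )) && break                        # enough reads observed (72 then 136 here)
    sleep 0.25
  done
  kill "$perfpid"; wait "$perfpid"                   # the real test uses killprocess 378715
  kill -0 "$nvmfpid"                                 # target (pid 378403) must still be alive

The per-device table above (roughly 185-200 IOPS of 64 KiB verify I/O per Nvme*n1 over about one second) is bdevperf's final report, printed while it handles the shutdown signal; after the assertion, nvmftestfini unloads nvme-tcp/nvme-fabrics, kills the target and flushes cvl_0_1, which is what the rmmod and killprocess lines that follow show.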
13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:57.475 rmmod nvme_tcp 00:29:57.475 rmmod nvme_fabrics 00:29:57.475 rmmod nvme_keyring 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 378403 ']' 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 378403 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 378403 ']' 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 378403 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 378403 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 378403' 00:29:57.475 killing process with pid 378403 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 378403 00:29:57.475 13:41:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 378403 00:30:00.752 13:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:00.752 13:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:00.752 13:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:00.752 13:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:00.752 13:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:00.753 13:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.753 13:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:00.753 13:41:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:02.681 00:30:02.681 real 0m13.011s 00:30:02.681 user 0m42.744s 00:30:02.681 sys 0m2.263s 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:02.681 ************************************ 00:30:02.681 END TEST nvmf_shutdown_tc2 00:30:02.681 ************************************ 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:02.681 ************************************ 00:30:02.681 START TEST nvmf_shutdown_tc3 00:30:02.681 ************************************ 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:02.681 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:30:02.682 13:41:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:02.682 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:02.682 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:02.682 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:02.682 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:02.682 13:41:37 
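For tc3, nvmftestinit re-derives the test ports from hardware instead of trusting earlier state: it builds device-ID whitelists for Intel E810 (0x1592/0x159b), X722 and Mellanox parts, keeps the two 0000:0a:00.x functions found above (both 0x8086:0x159b, i.e. E810 handled by ice), and resolves each function's netdev name through sysfs, which is where the cvl_0_0/cvl_0_1 names in the 'Found net devices' lines come from. A condensed sketch of that sysfs lookup (PCI addresses from this log; loop shape paraphrased from nvmf/common.sh@382-401):

  for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdev in /sys/bus/pci/devices/$pci/net/*; do
      [[ -e $netdev ]] && echo "Found net devices under $pci: ${netdev##*/}"
    done
  done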
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:02.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:02.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:30:02.682 00:30:02.682 --- 10.0.0.2 ping statistics --- 00:30:02.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.682 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:02.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:30:02.682 00:30:02.682 --- 10.0.0.1 ping statistics --- 00:30:02.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.682 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=380138 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 380138 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 380138 ']' 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
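Two notes on the nvmfappstart line just above. First, the tripled 'ip netns exec cvl_0_0_ns_spdk' prefix appears to be harmless accumulation: nvmf/common.sh@270 prepends the namespace wrapper to NVMF_APP on every nvmftestinit in this shell, so by the third shutdown test case it shows up three times. Second, the -m 0x1E core mask is why the target's reactors land on cores 1-4: 0x1E is binary 11110, bits 1 through 4 set, matching the 'Reactor started on core 1/2/3/4' notices that follow. A one-liner to decode any such mask (plain bash, nothing test-specific):

  mask=0x1E; for i in {0..31}; do (( (mask >> i) & 1 )) && echo "reactor on core $i"; done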
00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:02.682 13:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:02.682 [2024-07-13 13:41:37.387697] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:02.682 [2024-07-13 13:41:37.387834] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.941 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.941 [2024-07-13 13:41:37.554951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:03.199 [2024-07-13 13:41:37.803657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.199 [2024-07-13 13:41:37.803723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.199 [2024-07-13 13:41:37.803745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.199 [2024-07-13 13:41:37.803762] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.199 [2024-07-13 13:41:37.803779] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:03.199 [2024-07-13 13:41:37.803917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:03.199 [2024-07-13 13:41:37.803988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:03.199 [2024-07-13 13:41:37.804028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.199 [2024-07-13 13:41:37.804040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:03.765 [2024-07-13 13:41:38.380551] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:30:03.765 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.766 13:41:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:03.766 Malloc1 00:30:04.024 [2024-07-13 13:41:38.523971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.024 Malloc2 00:30:04.024 Malloc3 00:30:04.281 Malloc4 00:30:04.281 Malloc5 00:30:04.281 Malloc6 00:30:04.539 Malloc7 00:30:04.539 Malloc8 00:30:04.822 Malloc9 00:30:04.822 Malloc10 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:04.822 
13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=380449 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 380449 /var/tmp/bdevperf.sock 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 380449 ']' 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:30:04.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.822 { 00:30:04.822 "params": { 00:30:04.822 "name": "Nvme$subsystem", 00:30:04.822 "trtype": "$TEST_TRANSPORT", 00:30:04.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.822 "adrfam": "ipv4", 00:30:04.822 "trsvcid": "$NVMF_PORT", 00:30:04.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.822 "hdgst": ${hdgst:-false}, 00:30:04.822 "ddgst": ${ddgst:-false} 00:30:04.822 }, 00:30:04.822 "method": "bdev_nvme_attach_controller" 00:30:04.822 } 00:30:04.822 EOF 00:30:04.822 )") 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.822 { 00:30:04.822 "params": { 00:30:04.822 "name": "Nvme$subsystem", 00:30:04.822 "trtype": "$TEST_TRANSPORT", 00:30:04.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.822 "adrfam": "ipv4", 00:30:04.822 "trsvcid": "$NVMF_PORT", 00:30:04.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.822 "hdgst": ${hdgst:-false}, 00:30:04.822 "ddgst": ${ddgst:-false} 00:30:04.822 }, 00:30:04.822 "method": "bdev_nvme_attach_controller" 00:30:04.822 } 00:30:04.822 EOF 00:30:04.822 )") 00:30:04.822 13:41:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.822 { 00:30:04.822 "params": { 00:30:04.822 "name": "Nvme$subsystem", 00:30:04.822 "trtype": "$TEST_TRANSPORT", 00:30:04.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.822 "adrfam": "ipv4", 00:30:04.822 "trsvcid": "$NVMF_PORT", 00:30:04.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.822 "hdgst": ${hdgst:-false}, 00:30:04.822 "ddgst": ${ddgst:-false} 00:30:04.822 }, 00:30:04.822 "method": "bdev_nvme_attach_controller" 00:30:04.822 } 00:30:04.822 EOF 00:30:04.822 )") 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.822 { 00:30:04.822 "params": { 00:30:04.822 "name": "Nvme$subsystem", 00:30:04.822 "trtype": "$TEST_TRANSPORT", 00:30:04.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.822 "adrfam": "ipv4", 00:30:04.822 "trsvcid": "$NVMF_PORT", 00:30:04.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.822 "hdgst": ${hdgst:-false}, 00:30:04.822 "ddgst": ${ddgst:-false} 00:30:04.822 }, 00:30:04.822 "method": "bdev_nvme_attach_controller" 00:30:04.822 } 00:30:04.822 EOF 00:30:04.822 )") 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.822 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.822 { 00:30:04.822 "params": { 00:30:04.822 "name": "Nvme$subsystem", 00:30:04.822 "trtype": "$TEST_TRANSPORT", 00:30:04.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.822 "adrfam": "ipv4", 00:30:04.822 "trsvcid": "$NVMF_PORT", 00:30:04.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.822 "hdgst": ${hdgst:-false}, 00:30:04.823 "ddgst": ${ddgst:-false} 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 } 00:30:04.823 EOF 00:30:04.823 )") 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.823 { 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme$subsystem", 00:30:04.823 "trtype": "$TEST_TRANSPORT", 00:30:04.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "$NVMF_PORT", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.823 "hdgst": ${hdgst:-false}, 00:30:04.823 "ddgst": ${ddgst:-false} 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 } 00:30:04.823 EOF 00:30:04.823 )") 00:30:04.823 13:41:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.823 { 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme$subsystem", 00:30:04.823 "trtype": "$TEST_TRANSPORT", 00:30:04.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "$NVMF_PORT", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.823 "hdgst": ${hdgst:-false}, 00:30:04.823 "ddgst": ${ddgst:-false} 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 } 00:30:04.823 EOF 00:30:04.823 )") 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.823 { 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme$subsystem", 00:30:04.823 "trtype": "$TEST_TRANSPORT", 00:30:04.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "$NVMF_PORT", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.823 "hdgst": ${hdgst:-false}, 00:30:04.823 "ddgst": ${ddgst:-false} 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 } 00:30:04.823 EOF 00:30:04.823 )") 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.823 { 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme$subsystem", 00:30:04.823 "trtype": "$TEST_TRANSPORT", 00:30:04.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "$NVMF_PORT", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.823 "hdgst": ${hdgst:-false}, 00:30:04.823 "ddgst": ${ddgst:-false} 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 } 00:30:04.823 EOF 00:30:04.823 )") 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.823 { 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme$subsystem", 00:30:04.823 "trtype": "$TEST_TRANSPORT", 00:30:04.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "$NVMF_PORT", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.823 "hdgst": ${hdgst:-false}, 00:30:04.823 "ddgst": ${ddgst:-false} 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 } 00:30:04.823 EOF 00:30:04.823 )") 00:30:04.823 13:41:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:30:04.823 13:41:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme1", 00:30:04.823 "trtype": "tcp", 00:30:04.823 "traddr": "10.0.0.2", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "4420", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:04.823 "hdgst": false, 00:30:04.823 "ddgst": false 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 },{ 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme2", 00:30:04.823 "trtype": "tcp", 00:30:04.823 "traddr": "10.0.0.2", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "4420", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:04.823 "hdgst": false, 00:30:04.823 "ddgst": false 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 },{ 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme3", 00:30:04.823 "trtype": "tcp", 00:30:04.823 "traddr": "10.0.0.2", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "4420", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:04.823 "hdgst": false, 00:30:04.823 "ddgst": false 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 },{ 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme4", 00:30:04.823 "trtype": "tcp", 00:30:04.823 "traddr": "10.0.0.2", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "4420", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:04.823 "hdgst": false, 00:30:04.823 "ddgst": false 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 },{ 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme5", 00:30:04.823 "trtype": "tcp", 00:30:04.823 "traddr": "10.0.0.2", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "4420", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:04.823 "hdgst": false, 00:30:04.823 "ddgst": false 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 },{ 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme6", 00:30:04.823 "trtype": "tcp", 00:30:04.823 "traddr": "10.0.0.2", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "4420", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:04.823 "hdgst": false, 00:30:04.823 "ddgst": false 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 },{ 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme7", 00:30:04.823 "trtype": "tcp", 00:30:04.823 "traddr": "10.0.0.2", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "4420", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:04.823 "hdgst": false, 00:30:04.823 "ddgst": false 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 },{ 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme8", 00:30:04.823 "trtype": "tcp", 00:30:04.823 "traddr": "10.0.0.2", 00:30:04.823 "adrfam": "ipv4", 
00:30:04.823 "trsvcid": "4420", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:04.823 "hdgst": false, 00:30:04.823 "ddgst": false 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 },{ 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme9", 00:30:04.823 "trtype": "tcp", 00:30:04.823 "traddr": "10.0.0.2", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "4420", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:04.823 "hdgst": false, 00:30:04.823 "ddgst": false 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 },{ 00:30:04.823 "params": { 00:30:04.823 "name": "Nvme10", 00:30:04.823 "trtype": "tcp", 00:30:04.823 "traddr": "10.0.0.2", 00:30:04.823 "adrfam": "ipv4", 00:30:04.823 "trsvcid": "4420", 00:30:04.823 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:04.823 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:04.823 "hdgst": false, 00:30:04.823 "ddgst": false 00:30:04.823 }, 00:30:04.823 "method": "bdev_nvme_attach_controller" 00:30:04.823 }' 00:30:04.823 [2024-07-13 13:41:39.540514] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:04.823 [2024-07-13 13:41:39.540668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380449 ] 00:30:05.080 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.080 [2024-07-13 13:41:39.668940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.338 [2024-07-13 13:41:39.906961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.235 Running I/O for 10 seconds... 
00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:07.493 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:07.751 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.751 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=13 00:30:07.751 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 13 -ge 100 ']' 00:30:07.751 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:08.009 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:08.010 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:08.010 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:08.010 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:08.010 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.010 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:08.010 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.010 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # read_io_count=67 00:30:08.010 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:30:08.010 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 380138 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 380138 ']' 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 380138 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 380138 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 380138' 00:30:08.276 killing process with pid 380138 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 380138 00:30:08.276 13:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 380138 00:30:08.276 [2024-07-13 13:41:42.876397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878732] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.878990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879151] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879547] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.879861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888024] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888091] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888473] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888838] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.276 [2024-07-13 13:41:42.888937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.888955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.888972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.888989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.889007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.889046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.889065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.889083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.889100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.889118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.889136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.889153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.889170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.889211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.889228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.889255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.890814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.890849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.890876] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.890930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.890949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.890967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.890986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891004] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891294] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891653] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891917] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.891986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.892003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:08.277 [2024-07-13 13:41:42.892977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 
13:41:42.893080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 13:41:42.893130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 13:41:42.893175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 13:41:42.893246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 13:41:42.893291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 13:41:42.893335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 13:41:42.893379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 13:41:42.893424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 13:41:42.893474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 13:41:42.893519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 
13:41:42.893563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 13:41:42.893607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 13:41:42.893651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 13:41:42.893694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.277 [2024-07-13 13:41:42.893716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.277 [2024-07-13 13:41:42.893739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.893760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 13:41:42.893783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.893804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 13:41:42.893863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.893895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 13:41:42.893928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.893950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 13:41:42.893974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.893996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 13:41:42.894020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.894042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 
13:41:42.894065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.894092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 13:41:42.894117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.894139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 13:41:42.894174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.894211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 13:41:42.894235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.894256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 13:41:42.894279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.894300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 13:41:42.894323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.894344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 13:41:42.894367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.894390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 13:41:42.894412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.894433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.278 [2024-07-13 13:41:42.894457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.278 [2024-07-13 13:41:42.894478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.894476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 [2024-07-13 13:41:42.894509] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.894529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:1[2024-07-13 13:41:42.894548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same [2024-07-13 13:41:42.894568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:30:08.279 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.894594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 [2024-07-13 13:41:42.894613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.894631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:1[2024-07-13 13:41:42.894648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same [2024-07-13 13:41:42.894668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:30:08.279 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.894686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 [2024-07-13 13:41:42.894704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.894721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the 
state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same [2024-07-13 13:41:42.894737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:1with the state(5) to be set 00:30:08.279 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 [2024-07-13 13:41:42.894758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.894776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 [2024-07-13 13:41:42.894794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.894812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same [2024-07-13 13:41:42.894829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:1with the state(5) to be set 00:30:08.279 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 [2024-07-13 13:41:42.894853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.894895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128[2024-07-13 13:41:42.894922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same [2024-07-13 13:41:42.894943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:30:08.279 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.894962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 [2024-07-13 13:41:42.894981] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.894990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.895000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.895014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 [2024-07-13 13:41:42.895020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.895035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.895039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.895057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.895059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 [2024-07-13 13:41:42.895076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.895081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.895094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.895105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 [2024-07-13 13:41:42.895112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.895126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.895131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.895150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.279 [2024-07-13 13:41:42.895155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.895172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.279 [2024-07-13 13:41:42.895175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.279 [2024-07-13 13:41:42.895208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) 
to be set 00:30:08.280 [2024-07-13 13:41:42.895212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.280 [2024-07-13 13:41:42.895226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.280 [2024-07-13 13:41:42.895244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.280 [2024-07-13 13:41:42.895262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-13 13:41:42.895280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.280 with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.280 [2024-07-13 13:41:42.895316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.280 [2024-07-13 13:41:42.895333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128[2024-07-13 13:41:42.895350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.280 with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same [2024-07-13 13:41:42.895370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:30:08.280 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.280 [2024-07-13 13:41:42.895389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.280 [2024-07-13 13:41:42.895406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895417] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.280 [2024-07-13 13:41:42.895424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.280 [2024-07-13 13:41:42.895458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.280 [2024-07-13 13:41:42.895477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.280 [2024-07-13 13:41:42.895496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-13 13:41:42.895514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.280 with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.280 [2024-07-13 13:41:42.895551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.280 [2024-07-13 13:41:42.895569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:12[2024-07-13 13:41:42.895586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.280 with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same [2024-07-13 13:41:42.895606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:30:08.280 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.280 [2024-07-13 13:41:42.895625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 
13:41:42.895631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.280 [2024-07-13 13:41:42.895643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.280 [2024-07-13 13:41:42.895660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same [2024-07-13 13:41:42.895676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:12with the state(5) to be set 00:30:08.280 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.280 [2024-07-13 13:41:42.895705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.280 [2024-07-13 13:41:42.895700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:08.280 [2024-07-13 13:41:42.895730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.280 [2024-07-13 13:41:42.895753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.280 [2024-07-13 13:41:42.895777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.280 [2024-07-13 13:41:42.895798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.280 [2024-07-13 13:41:42.895822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.895862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.895908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.895931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.895959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.895982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.896006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.896028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.896052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.896074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.896098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.896119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.896143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.896169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.896261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.281 [2024-07-13 13:41:42.896558] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f8900 was disconnected and freed. reset controller. 00:30:08.281 [2024-07-13 13:41:42.896889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.896932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.896963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.896992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897225] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.897965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.897988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.281 [2024-07-13 13:41:42.898688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.281 [2024-07-13 13:41:42.898711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.898734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.898756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.898780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.898801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.898825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.898861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.898899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.898939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.898964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.898986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) 
to be set 00:30:08.282 [2024-07-13 13:41:42.899174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same [2024-07-13 13:41:42.899311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:1with the state(5) to be set 00:30:08.282 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same [2024-07-13 13:41:42.899366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:1with the state(5) to be set 00:30:08.282 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same [2024-07-13 13:41:42.899458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128with the state(5) to be set 00:30:08.282 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 
13:41:42.899618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same
with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.282 [2024-07-13 13:41:42.899929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.282 [2024-07-13 13:41:42.899935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.282 [2024-07-13 13:41:42.899948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.899959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.283 [2024-07-13 13:41:42.899967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.899981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.899985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.283 [2024-07-13 13:41:42.900021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.900039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.283 [2024-07-13 13:41:42.900075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.900093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.283 [2024-07-13 13:41:42.900111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.900129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.283 [2024-07-13 13:41:42.900216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.900513] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f8b80 was disconnected and freed. reset controller.
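The burst of "ABORTED - SQ DELETION (00/08)" completions above is the host draining its queued READs once the target deletes the submission queue during the TCP disconnect: the status pair (00/08) is status code type 0x00 (generic) with status code 0x08 (Command Aborted due to SQ Deletion), after which bdev_nvme frees the qpair and schedules the controller reset. A minimal sketch of how a host application's completion callback might recognize that status and hold the I/O for resubmission after the reset; the io_ctx bookkeeping is hypothetical illustration code, and only the spdk_nvme_cpl fields and SPDK_NVME_* constants are assumed from the public SPDK headers:

#include "spdk/nvme.h"

/* Hypothetical per-I/O context passed as cb_arg when the command was submitted. */
struct io_ctx {
	int retry_pending;
};

static int
is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
{
	/* "(00/08)" in the log: status code type 0x00 (generic), status code 0x08. */
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

/* Completion callback with the standard spdk_nvme_cmd_cb signature. */
static void
io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = cb_arg;

	/* The qpair is being torn down underneath us; mark the request so the
	 * application resubmits it once the controller reset/reconnect completes. */
	ctx->retry_pending = is_sq_deletion_abort(cpl);
}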
00:30:08.283 [2024-07-13 13:41:42.901294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.901328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.901355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.901375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.901396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.901416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.901437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.901457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.901476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.901553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.901581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.901604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.901624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.901653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.901675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.901696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.901716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.901736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.901802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.901829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.901851] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.901880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.901915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.901935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.901962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.901984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.902070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.902098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.902154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.902203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.902245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.902323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.902351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.902400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.902444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.902485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.902561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.902588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.902632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.902674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.902715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.902756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.902791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.902797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.283 [2024-07-13 13:41:42.902810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.902825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.283 [2024-07-13 13:41:42.902828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.283 [2024-07-13 13:41:42.902847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.902850] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.284 [2024-07-13 13:41:42.902872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.902878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.284 [2024-07-13 13:41:42.902893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.902901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.284 [2024-07-13 13:41:42.902927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.902929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.284 [2024-07-13 13:41:42.902949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.902953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.284 [2024-07-13 13:41:42.902967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.902973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.284 [2024-07-13 13:41:42.902985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.902992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.284 [2024-07-13 13:41:42.903142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.284 [2024-07-13 13:41:42.903143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.284 [2024-07-13 13:41:42.903165]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.284 [2024-07-13 13:41:42.903210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.284 [2024-07-13 13:41:42.903229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.284 [2024-07-13 13:41:42.903248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.284 [2024-07-13 13:41:42.903270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.284 [2024-07-13 13:41:42.903289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.284 [2024-07-13 13:41:42.903409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.284 [2024-07-13 13:41:42.903427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80
is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.284 [2024-07-13 13:41:42.903463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.284 [2024-07-13 13:41:42.903482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.284 [2024-07-13 13:41:42.903500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.284 [2024-07-13 13:41:42.903524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.284 [2024-07-13 13:41:42.903542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.284 [2024-07-13 13:41:42.903560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.284 [2024-07-13 13:41:42.903633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.903990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.904007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.904024] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.904040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.904061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905950] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.905986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.285 [2024-07-13 13:41:42.906545] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.286 [2024-07-13 13:41:42.906562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.286 [2024-07-13 13:41:42.906580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.286 [2024-07-13 13:41:42.906596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.286 [2024-07-13 13:41:42.906613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.286 [2024-07-13 13:41:42.906630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.286 [2024-07-13 13:41:42.906646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.286 [2024-07-13 13:41:42.906663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.286 [2024-07-13 13:41:42.906737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:30:08.286 [2024-07-13 13:41:42.907422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:08.286 [2024-07-13 13:41:42.907471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:08.286 [2024-07-13 13:41:42.907509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:30:08.286 [2024-07-13 13:41:42.907544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:30:08.286 [2024-07-13 13:41:42.907616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.907648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.907700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.907727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.907754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.907776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.907801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.907823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.907873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.907918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.907949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.907972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.907996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 
[2024-07-13 13:41:42.908110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 
13:41:42.908590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.908968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.908989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.909013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.909034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.909058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 
13:41:42.909080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.909103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.286 [2024-07-13 13:41:42.909126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.286 [2024-07-13 13:41:42.909170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 
13:41:42.909582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.287 [2024-07-13 13:41:42.909834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.287 [2024-07-13 13:41:42.909877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.438428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.438525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.438555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.438579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.438604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.438626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.438650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 
13:41:43.438674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.438698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.438720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.438744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.438766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.438789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.438813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.438838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.438860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.438905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.438928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.438952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.438974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.439005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.439028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.439052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.439074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.439098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.439119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.439143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 
13:41:43.439165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.439189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.439211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.439234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.439257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.439280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.439302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.439326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.439348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.439372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.868 [2024-07-13 13:41:43.439394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.439749] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f8680 was disconnected and freed. reset controller. 
00:30:08.868 [2024-07-13 13:41:43.440603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:30:08.868 [2024-07-13 13:41:43.440673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor 00:30:08.868 [2024-07-13 13:41:43.440721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:30:08.868 [2024-07-13 13:41:43.440770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:08.868 [2024-07-13 13:41:43.440810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:30:08.868 [2024-07-13 13:41:43.440854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:30:08.868 [2024-07-13 13:41:43.440911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:30:08.868 [2024-07-13 13:41:43.441003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.868 [2024-07-13 13:41:43.441034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.441057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.868 [2024-07-13 13:41:43.441078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.441099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.868 [2024-07-13 13:41:43.441119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.868 [2024-07-13 13:41:43.441140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:08.868 [2024-07-13 13:41:43.441160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.441179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set 00:30:08.869 [2024-07-13 13:41:43.444035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.869 [2024-07-13 13:41:43.444349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.869 [2024-07-13 13:41:43.444391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420 00:30:08.869 [2024-07-13 13:41:43.444426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:30:08.869 [2024-07-13 13:41:43.444637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.869 [2024-07-13 13:41:43.444674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=4420 00:30:08.869 [2024-07-13 13:41:43.444698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:30:08.869 [2024-07-13 13:41:43.444860] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:08.869 [2024-07-13 13:41:43.444973] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:08.869 [2024-07-13 13:41:43.445066] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:08.869 [2024-07-13 13:41:43.445167] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:08.869 [2024-07-13 13:41:43.445412] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:08.869 [2024-07-13 13:41:43.445624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.869 [2024-07-13 13:41:43.445662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:30:08.869 [2024-07-13 13:41:43.445698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:30:08.869 [2024-07-13 13:41:43.445725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:30:08.869 [2024-07-13 13:41:43.445757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:30:08.869 [2024-07-13 13:41:43.446524] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:08.869 [2024-07-13 13:41:43.446679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:08.869 [2024-07-13 13:41:43.446716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:08.869 [2024-07-13 13:41:43.446755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:08.869 [2024-07-13 13:41:43.446779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:08.869 [2024-07-13 13:41:43.446813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:08.869 [2024-07-13 13:41:43.446835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:08.869 [2024-07-13 13:41:43.446855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:30:08.869 [2024-07-13 13:41:43.447022] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:08.869 [2024-07-13 13:41:43.447072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.869 [2024-07-13 13:41:43.447099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:08.869 [2024-07-13 13:41:43.447117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.869 [2024-07-13 13:41:43.447135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.869 [2024-07-13 13:41:43.447154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.869 [2024-07-13 13:41:43.447272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.869 [2024-07-13 13:41:43.450601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:30:08.869 [2024-07-13 13:41:43.450817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.450879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.450940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.450966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.450993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 
[2024-07-13 13:41:43.451755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.451961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.451984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.452006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.452029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.452051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.452074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.452096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.452119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.452141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.452173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.869 [2024-07-13 13:41:43.452195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.869 [2024-07-13 13:41:43.452218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 
13:41:43.452264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452732] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.452974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.452996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.453967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.453989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.454010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8e00 is same with the state(5) to be set 00:30:08.870 [2024-07-13 13:41:43.455578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.455609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.455638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.455661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.455686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.455707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.455730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.455752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.870 [2024-07-13 13:41:43.455775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.870 [2024-07-13 13:41:43.455796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.455820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.455862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.455900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.455923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.455947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.455969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.455993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.456973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.456994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.871 [2024-07-13 13:41:43.457828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.871 [2024-07-13 13:41:43.457863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.457903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.457926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.457951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.457972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.457996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.458616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.458637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9080 is same with the state(5) to be set 00:30:08.872 [2024-07-13 13:41:43.460199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460621] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.460958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.460986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.461009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.461033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.872 [2024-07-13 13:41:43.461055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.872 [2024-07-13 13:41:43.461078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.461962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.461984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:08.873 [2024-07-13 13:41:43.462544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.462972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.462995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 
13:41:43.463017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.463041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.873 [2024-07-13 13:41:43.463062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.873 [2024-07-13 13:41:43.463086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.463107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.463130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.463152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.463175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.463212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.463236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9300 is same with the state(5) to be set 00:30:08.874 [2024-07-13 13:41:43.464747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.464777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.464807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.464830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.464888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.464919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.464944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.464967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.464991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.465964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.465987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.874 [2024-07-13 13:41:43.466609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.874 [2024-07-13 13:41:43.466631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.466656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.466680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.466701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.466725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.466745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.466768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.466788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.466811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.466832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.466877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.466901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.466925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.466947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.466970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.466991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.467766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.467788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9580 is same with the state(5) to be set 00:30:08.875 [2024-07-13 13:41:43.469353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.469382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.469411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.469434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.469457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.469479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.469501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.469522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.469544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.469565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.469587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.469608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.469630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.469651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.469673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.469693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.469716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.469737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.469760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.469793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.469818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.469839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.469887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.469911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.469934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.469960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.469984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.470005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.470028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.470049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.470074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.470096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.875 [2024-07-13 13:41:43.470120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.875 [2024-07-13 13:41:43.470141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.470973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.470997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.876 [2024-07-13 13:41:43.471814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.876 [2024-07-13 13:41:43.471835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.471857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.471916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.471942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.471964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.471987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.472009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.472033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.472054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.472077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.472098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.472121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.472142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.472166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.472201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.472225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.472246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.472269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.472294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.472318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.472339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.472359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9800 is same with the state(5) to be set 00:30:08.877 [2024-07-13 13:41:43.473897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.473929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.473960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.473984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474191] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.474957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.474978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.475009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.475032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.475056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.475077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.475102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.475124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.475147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.475169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.475192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.475213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.475237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.475258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.475281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.475303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.475326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.475348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.475371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.877 [2024-07-13 13:41:43.475394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.877 [2024-07-13 13:41:43.475418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.475439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.475463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.475485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.475509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.475530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.475554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.475579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.475604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.475626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.475650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.475672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.475696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.475718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.475741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.475763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.475787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.475809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.475834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.475855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.475889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.475917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.475942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.475965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.475990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:08.878 [2024-07-13 13:41:43.476104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 
13:41:43.476566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.878 [2024-07-13 13:41:43.476901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.878 [2024-07-13 13:41:43.476924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9d00 is same with the state(5) to be set 00:30:08.878 [2024-07-13 13:41:43.482606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:08.878 [2024-07-13 13:41:43.482700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:08.878 [2024-07-13 13:41:43.482729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:08.878 [2024-07-13 13:41:43.482893] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.878 [2024-07-13 13:41:43.482937] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:08.878 [2024-07-13 13:41:43.482977] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.878 [2024-07-13 13:41:43.483016] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.878 [2024-07-13 13:41:43.483045] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.878 [2024-07-13 13:41:43.483246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:08.878 [2024-07-13 13:41:43.483282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:30:08.878 [2024-07-13 13:41:43.483344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:08.878 [2024-07-13 13:41:43.483376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:08.878 [2024-07-13 13:41:43.483402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:08.878 [2024-07-13 13:41:43.483780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.878 [2024-07-13 13:41:43.483825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420 00:30:08.878 [2024-07-13 13:41:43.483853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set 00:30:08.878 [2024-07-13 13:41:43.484031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.878 [2024-07-13 13:41:43.484078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:30:08.878 [2024-07-13 13:41:43.484102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:30:08.878 [2024-07-13 13:41:43.484279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.878 [2024-07-13 13:41:43.484314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4a80 with addr=10.0.0.2, port=4420 00:30:08.878 [2024-07-13 13:41:43.484345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set 00:30:08.878 [2024-07-13 13:41:43.487526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.878 [2024-07-13 13:41:43.487577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5200 with addr=10.0.0.2, port=4420 00:30:08.878 [2024-07-13 13:41:43.487602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set 00:30:08.879 [2024-07-13 13:41:43.487793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.879 [2024-07-13 13:41:43.487828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5980 with addr=10.0.0.2, port=4420 00:30:08.879 [2024-07-13 13:41:43.487851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set 00:30:08.879 [2024-07-13 13:41:43.488262] posix.c:1038:posix_sock_create: *ERROR*: 
connect() failed, errno = 111 00:30:08.879 [2024-07-13 13:41:43.488296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420 00:30:08.879 [2024-07-13 13:41:43.488319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set 00:30:08.879 [2024-07-13 13:41:43.488483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.879 [2024-07-13 13:41:43.488516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=4420 00:30:08.879 [2024-07-13 13:41:43.488539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set 00:30:08.879 [2024-07-13 13:41:43.488687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.879 [2024-07-13 13:41:43.488721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420 00:30:08.879 [2024-07-13 13:41:43.488744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:30:08.879 [2024-07-13 13:41:43.488780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:30:08.879 [2024-07-13 13:41:43.488816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:30:08.879 [2024-07-13 13:41:43.488845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:30:08.879 [2024-07-13 13:41:43.489143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:08.879 [2024-07-13 13:41:43.489431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 
13:41:43.489905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.489951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.489973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490370] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.879 [2024-07-13 13:41:43.490806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.879 [2024-07-13 13:41:43.490830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.490852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.490883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.490907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.490932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.490953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.490979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.491955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.491977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.492001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.492022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.492046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.492068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.492092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.492114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.492139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.492160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.492200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.880 [2024-07-13 13:41:43.492222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:08.880 [2024-07-13 13:41:43.492243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9a80 is same with the state(5) to be set 00:30:08.880 [2024-07-13 13:41:43.497084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:08.880 task offset: 27904 on job bdev=Nvme2n1 fails 00:30:08.880 00:30:08.880 Latency(us) 00:30:08.880 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:08.880 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:08.880 Job: Nvme1n1 ended in about 1.59 seconds with error
00:30:08.880 Verification LBA range: start 0x0 length 0x400
00:30:08.880 Nvme1n1 : 1.59 120.66 7.54 40.22 0.00 395084.61 25243.50 568561.21
00:30:08.880 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:08.880 Job: Nvme2n1 ended in about 1.06 seconds with error
00:30:08.880 Verification LBA range: start 0x0 length 0x400
00:30:08.880 Nvme2n1 : 1.06 181.84 11.37 60.61 0.00 256347.69 11893.57 293601.28
00:30:08.880 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:08.880 Job: Nvme3n1 ended in about 1.06 seconds with error
00:30:08.880 Verification LBA range: start 0x0 length 0x400
00:30:08.880 Nvme3n1 : 1.06 181.64 11.35 60.55 0.00 251778.37 9126.49 301368.51
00:30:08.880 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:08.880 Job: Nvme4n1 ended in about 1.60 seconds with error
00:30:08.880 Verification LBA range: start 0x0 length 0x400
00:30:08.880 Nvme4n1 : 1.60 119.69 7.48 39.90 0.00 383623.02 20097.71 683516.21
00:30:08.880 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:08.880 Job: Nvme5n1 ended in about 1.61 seconds with error
00:30:08.880 Verification LBA range: start 0x0 length 0x400
00:30:08.880 Nvme5n1 : 1.61 79.57 4.97 39.78 0.00 506592.27 26408.58 748760.94
00:30:08.880 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:08.880 Job: Nvme6n1 ended in about 1.61 seconds with error
00:30:08.880 Verification LBA range: start 0x0 length 0x400
00:30:08.880 Nvme6n1 : 1.61 79.34 4.96 39.67 0.00 501834.33 27573.67 807791.88
00:30:08.880 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:08.881 Job: Nvme7n1 ended in about 1.62 seconds with error
00:30:08.881 Verification LBA range: start 0x0 length 0x400
00:30:08.881 Nvme7n1 : 1.62 79.12 4.94 39.56 0.00 496788.16 22136.60 807791.88
00:30:08.881 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:08.881 Job: Nvme8n1 ended in about 1.62 seconds with error
00:30:08.881 Verification LBA range: start 0x0 length 0x400
00:30:08.881 Nvme8n1 : 1.62 81.98 5.12 39.45 0.00 479002.98 34952.53 820219.45
00:30:08.881 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:08.881 Job: Nvme9n1 ended in about 1.64 seconds with error
00:30:08.881 Verification LBA range: start 0x0 length 0x400
00:30:08.881 Nvme9n1 : 1.64 77.94 4.87 38.97 0.00 491757.67 26991.12 689729.99
00:30:08.881 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:08.881 Job: Nvme10n1 ended in about 1.63 seconds with error
00:30:08.881 Verification LBA range: start 0x0 length 0x400
00:30:08.881 Nvme10n1 : 1.63 78.67 4.92 39.34 0.00 480382.67 24660.95 730119.59
00:30:08.881 ===================================================================================================================
00:30:08.881 Total : 1080.44 67.53 438.04 0.00 412400.36 9126.49 820219.45
00:30:08.881 [2024-07-13 13:41:43.580202] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:08.881 [2024-07-13 13:41:43.580331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:30:08.881 [2024-07-13 13:41:43.580472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush
tqpair=0x6150001f5200 (9): Bad file descriptor 00:30:08.881 [2024-07-13 13:41:43.580514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor 00:30:08.881 [2024-07-13 13:41:43.580543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor 00:30:08.881 [2024-07-13 13:41:43.580570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:30:08.881 [2024-07-13 13:41:43.580599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor 00:30:08.881 [2024-07-13 13:41:43.580623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:08.881 [2024-07-13 13:41:43.580644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:08.881 [2024-07-13 13:41:43.580667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:08.881 [2024-07-13 13:41:43.580716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:08.881 [2024-07-13 13:41:43.580738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:08.881 [2024-07-13 13:41:43.580758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:08.881 [2024-07-13 13:41:43.580786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:30:08.881 [2024-07-13 13:41:43.580806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:30:08.881 [2024-07-13 13:41:43.580824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:30:08.881 [2024-07-13 13:41:43.580933] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.881 [2024-07-13 13:41:43.580969] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.881 [2024-07-13 13:41:43.580996] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.881 [2024-07-13 13:41:43.581030] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.881 [2024-07-13 13:41:43.581058] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.881 [2024-07-13 13:41:43.581085] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.881 [2024-07-13 13:41:43.581113] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.881 [2024-07-13 13:41:43.581141] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.881 [2024-07-13 13:41:43.581366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.881 [2024-07-13 13:41:43.581394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:08.881 [2024-07-13 13:41:43.581411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.881 [2024-07-13 13:41:43.581804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.881 [2024-07-13 13:41:43.581850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:30:08.881 [2024-07-13 13:41:43.581898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:30:08.881 [2024-07-13 13:41:43.582082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.881 [2024-07-13 13:41:43.582118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420 00:30:08.881 [2024-07-13 13:41:43.582141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set 00:30:08.881 [2024-07-13 13:41:43.582163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:08.881 [2024-07-13 13:41:43.582181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:08.881 [2024-07-13 13:41:43.582200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:08.881 [2024-07-13 13:41:43.582237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:08.881 [2024-07-13 13:41:43.582260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:08.881 [2024-07-13 13:41:43.582296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:08.881 [2024-07-13 13:41:43.582323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:08.881 [2024-07-13 13:41:43.582349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:08.881 [2024-07-13 13:41:43.582370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:30:08.881 [2024-07-13 13:41:43.582397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:08.881 [2024-07-13 13:41:43.582418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:08.881 [2024-07-13 13:41:43.582437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:30:08.881 [2024-07-13 13:41:43.582462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:08.881 [2024-07-13 13:41:43.582483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:08.881 [2024-07-13 13:41:43.582502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:08.881 [2024-07-13 13:41:43.582571] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:08.881 [2024-07-13 13:41:43.582604] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.881 [2024-07-13 13:41:43.582630] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.881 [2024-07-13 13:41:43.582656] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.881 [2024-07-13 13:41:43.582681] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:08.881 [2024-07-13 13:41:43.583289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.881 [2024-07-13 13:41:43.583321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.881 [2024-07-13 13:41:43.583339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.881 [2024-07-13 13:41:43.583357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.881 [2024-07-13 13:41:43.583374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.881 [2024-07-13 13:41:43.583413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:08.881 [2024-07-13 13:41:43.583447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:30:08.881 [2024-07-13 13:41:43.583570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:08.881 [2024-07-13 13:41:43.583606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:08.881 [2024-07-13 13:41:43.583664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:08.881 [2024-07-13 13:41:43.583689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:08.881 [2024-07-13 13:41:43.583709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:08.881 [2024-07-13 13:41:43.583736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:30:08.881 [2024-07-13 13:41:43.583759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:30:08.881 [2024-07-13 13:41:43.583778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:30:08.881 [2024-07-13 13:41:43.583838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:08.881 [2024-07-13 13:41:43.583905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.881 [2024-07-13 13:41:43.583930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:08.881 [2024-07-13 13:41:43.584143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.881 [2024-07-13 13:41:43.584179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4a80 with addr=10.0.0.2, port=4420 00:30:08.881 [2024-07-13 13:41:43.584203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set 00:30:08.881 [2024-07-13 13:41:43.584408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.881 [2024-07-13 13:41:43.584442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:30:08.881 [2024-07-13 13:41:43.584465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:30:08.881 [2024-07-13 13:41:43.584702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.881 [2024-07-13 13:41:43.584737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420 00:30:08.881 [2024-07-13 13:41:43.584760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set 00:30:08.881 [2024-07-13 13:41:43.584788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor 00:30:08.881 [2024-07-13 13:41:43.584817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:30:08.881 [2024-07-13 13:41:43.584902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:30:08.881 [2024-07-13 13:41:43.584934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:30:08.881 [2024-07-13 13:41:43.584956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:30:08.881 [2024-07-13 13:41:43.584975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:30:08.882 [2024-07-13 13:41:43.585002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:08.882 [2024-07-13 13:41:43.585024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:08.882 [2024-07-13 13:41:43.585043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:08.882 [2024-07-13 13:41:43.585108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.882 [2024-07-13 13:41:43.585134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:08.882 [2024-07-13 13:41:43.585154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:08.882 [2024-07-13 13:41:43.585174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:08.882 [2024-07-13 13:41:43.585193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:30:08.882 [2024-07-13 13:41:43.585252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.406 13:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:30:11.406 13:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 380449 00:30:12.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (380449) - No such process 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:12.341 rmmod nvme_tcp 00:30:12.341 rmmod nvme_fabrics 00:30:12.341 rmmod nvme_keyring 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:12.341 13:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.245 13:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:30:14.245 00:30:14.245 real 0m11.864s 00:30:14.245 user 0m34.514s 00:30:14.245 sys 0m2.094s 00:30:14.245 13:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:14.245 13:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:14.245 ************************************ 00:30:14.245 END TEST nvmf_shutdown_tc3 00:30:14.245 ************************************ 00:30:14.504 13:41:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:30:14.504 13:41:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:30:14.504 00:30:14.504 real 0m42.678s 00:30:14.504 user 2m13.854s 00:30:14.504 sys 0m8.457s 00:30:14.504 13:41:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:14.504 13:41:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:14.504 ************************************ 00:30:14.504 END TEST nvmf_shutdown 00:30:14.504 ************************************ 00:30:14.504 13:41:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:14.504 13:41:49 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:30:14.504 13:41:49 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:14.504 13:41:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.504 13:41:49 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:30:14.504 13:41:49 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:14.504 13:41:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.504 13:41:49 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:30:14.504 13:41:49 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:14.504 13:41:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:14.504 13:41:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.504 13:41:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.504 ************************************ 00:30:14.504 START TEST nvmf_multicontroller 00:30:14.504 ************************************ 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:14.504 * Looking for test storage... 
00:30:14.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.504 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:14.505 13:41:49 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:30:14.505 13:41:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.407 13:41:50 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:16.407 13:41:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:16.407 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:16.407 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:16.407 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:16.407 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.407 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.407 13:41:51 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:16.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:30:16.408 00:30:16.408 --- 10.0.0.2 ping statistics --- 00:30:16.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.408 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:16.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:30:16.408 00:30:16.408 --- 10.0.0.1 ping statistics --- 00:30:16.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.408 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:16.408 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:16.666 13:41:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:16.666 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:16.666 13:41:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:16.666 13:41:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.666 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=383233 00:30:16.667 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:16.667 13:41:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 383233 00:30:16.667 13:41:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 383233 ']' 00:30:16.667 13:41:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.667 13:41:51 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:30:16.667 13:41:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.667 13:41:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:16.667 13:41:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.667 [2024-07-13 13:41:51.262070] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:16.667 [2024-07-13 13:41:51.262220] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.667 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.667 [2024-07-13 13:41:51.399138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:16.924 [2024-07-13 13:41:51.657244] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.925 [2024-07-13 13:41:51.657327] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.925 [2024-07-13 13:41:51.657361] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.925 [2024-07-13 13:41:51.657387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.925 [2024-07-13 13:41:51.657410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:16.925 [2024-07-13 13:41:51.657826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.925 [2024-07-13 13:41:51.657888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.925 [2024-07-13 13:41:51.657907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:17.490 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:17.490 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:30:17.490 13:41:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:17.490 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:17.490 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.750 [2024-07-13 13:41:52.253265] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.750 Malloc0 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.750 [2024-07-13 13:41:52.371933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.750 
13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.750 [2024-07-13 13:41:52.379774] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.750 Malloc1 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=383398 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 383398 /var/tmp/bdevperf.sock 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 383398 ']' 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:17.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:17.750 13:41:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.162 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:19.162 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:30:19.162 13:41:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:19.162 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.162 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.162 NVMe0n1 00:30:19.162 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.162 13:41:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:19.162 13:41:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:19.162 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.162 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.162 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.163 1 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.163 request: 00:30:19.163 { 00:30:19.163 "name": "NVMe0", 00:30:19.163 "trtype": "tcp", 00:30:19.163 "traddr": "10.0.0.2", 00:30:19.163 "adrfam": "ipv4", 00:30:19.163 "trsvcid": "4420", 00:30:19.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.163 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:19.163 "hostaddr": "10.0.0.2", 00:30:19.163 "hostsvcid": "60000", 00:30:19.163 "prchk_reftag": false, 00:30:19.163 "prchk_guard": false, 00:30:19.163 "hdgst": false, 00:30:19.163 "ddgst": false, 00:30:19.163 "method": "bdev_nvme_attach_controller", 00:30:19.163 "req_id": 1 00:30:19.163 } 00:30:19.163 Got JSON-RPC error response 00:30:19.163 response: 00:30:19.163 { 00:30:19.163 "code": -114, 00:30:19.163 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:19.163 } 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.163 request: 00:30:19.163 { 00:30:19.163 "name": "NVMe0", 00:30:19.163 "trtype": "tcp", 00:30:19.163 "traddr": "10.0.0.2", 00:30:19.163 "adrfam": "ipv4", 00:30:19.163 "trsvcid": "4420", 00:30:19.163 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:19.163 "hostaddr": "10.0.0.2", 00:30:19.163 "hostsvcid": "60000", 00:30:19.163 "prchk_reftag": false, 00:30:19.163 "prchk_guard": false, 00:30:19.163 
"hdgst": false, 00:30:19.163 "ddgst": false, 00:30:19.163 "method": "bdev_nvme_attach_controller", 00:30:19.163 "req_id": 1 00:30:19.163 } 00:30:19.163 Got JSON-RPC error response 00:30:19.163 response: 00:30:19.163 { 00:30:19.163 "code": -114, 00:30:19.163 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:19.163 } 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.163 request: 00:30:19.163 { 00:30:19.163 "name": "NVMe0", 00:30:19.163 "trtype": "tcp", 00:30:19.163 "traddr": "10.0.0.2", 00:30:19.163 "adrfam": "ipv4", 00:30:19.163 "trsvcid": "4420", 00:30:19.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.163 "hostaddr": "10.0.0.2", 00:30:19.163 "hostsvcid": "60000", 00:30:19.163 "prchk_reftag": false, 00:30:19.163 "prchk_guard": false, 00:30:19.163 "hdgst": false, 00:30:19.163 "ddgst": false, 00:30:19.163 "multipath": "disable", 00:30:19.163 "method": "bdev_nvme_attach_controller", 00:30:19.163 "req_id": 1 00:30:19.163 } 00:30:19.163 Got JSON-RPC error response 00:30:19.163 response: 00:30:19.163 { 00:30:19.163 "code": -114, 00:30:19.163 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:30:19.163 } 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:19.163 13:41:53 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:19.163 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.164 request: 00:30:19.164 { 00:30:19.164 "name": "NVMe0", 00:30:19.164 "trtype": "tcp", 00:30:19.164 "traddr": "10.0.0.2", 00:30:19.164 "adrfam": "ipv4", 00:30:19.164 "trsvcid": "4420", 00:30:19.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.164 "hostaddr": "10.0.0.2", 00:30:19.164 "hostsvcid": "60000", 00:30:19.164 "prchk_reftag": false, 00:30:19.164 "prchk_guard": false, 00:30:19.164 "hdgst": false, 00:30:19.164 "ddgst": false, 00:30:19.164 "multipath": "failover", 00:30:19.164 "method": "bdev_nvme_attach_controller", 00:30:19.164 "req_id": 1 00:30:19.164 } 00:30:19.164 Got JSON-RPC error response 00:30:19.164 response: 00:30:19.164 { 00:30:19.164 "code": -114, 00:30:19.164 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:19.164 } 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.164 13:41:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.422 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.422 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:19.422 13:41:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:20.795 0 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 383398 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 383398 ']' 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 383398 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 383398 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 383398' 00:30:20.796 killing process with pid 383398 00:30:20.796 13:41:55 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 383398 00:30:20.796 13:41:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 383398 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:30:21.730 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:21.730 [2024-07-13 13:41:52.569945] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:21.730 [2024-07-13 13:41:52.570107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383398 ] 00:30:21.730 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.730 [2024-07-13 13:41:52.695629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.730 [2024-07-13 13:41:52.935721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.730 [2024-07-13 13:41:54.136907] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 41589a73-114c-47ea-b983-f68154906b19 already exists 00:30:21.730 [2024-07-13 13:41:54.136973] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:41589a73-114c-47ea-b983-f68154906b19 alias for bdev NVMe1n1 00:30:21.730 [2024-07-13 13:41:54.136999] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:21.730 Running I/O for 1 seconds... 
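The try.txt excerpt above is the bdevperf side of the test: NVMe0 was attached on port 4420 earlier in the run, conflicting re-attach attempts are expected to fail with JSON-RPC error -114, and a second path on port 4421 is exercised before I/O is driven. A condensed sketch of that flow, assuming the test's rpc_cmd wrapper maps directly onto scripts/rpc.py against the bdevperf RPC socket (commands and arguments taken from the trace above):

  # A second listener (port 4421) can be attached as an extra path for the
  # existing controller name and detached again ...
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # ... or attached as an independent controller; the test then checks that
  # two controllers are registered and kicks off bdevperf I/O over the same
  # RPC socket.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  test "$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe)" -eq 2
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Re-using the name NVMe0 with a conflicting subsystem NQN, host path, or multipath mode is what the NOT/valid_exec_arg checks earlier in the trace assert: each such call returns code -114 ("already exists").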
00:30:21.730 00:30:21.730 Latency(us) 00:30:21.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.730 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:21.730 NVMe0n1 : 1.01 13518.51 52.81 0.00 0.00 9451.74 2536.49 18252.99 00:30:21.730 =================================================================================================================== 00:30:21.730 Total : 13518.51 52.81 0.00 0.00 9451.74 2536.49 18252.99 00:30:21.730 Received shutdown signal, test time was about 1.000000 seconds 00:30:21.730 00:30:21.730 Latency(us) 00:30:21.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.730 =================================================================================================================== 00:30:21.730 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:21.730 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:21.730 rmmod nvme_tcp 00:30:21.730 rmmod nvme_fabrics 00:30:21.730 rmmod nvme_keyring 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 383233 ']' 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 383233 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 383233 ']' 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 383233 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 383233 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 383233' 00:30:21.730 killing process with pid 383233 00:30:21.730 13:41:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 383233 00:30:21.730 13:41:56 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 383233 00:30:23.631 13:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:23.631 13:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:23.631 13:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:23.631 13:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:23.631 13:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:23.631 13:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.631 13:41:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:23.631 13:41:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.531 13:42:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:25.531 00:30:25.531 real 0m10.952s 00:30:25.531 user 0m22.872s 00:30:25.531 sys 0m2.530s 00:30:25.531 13:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:25.531 13:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:25.531 ************************************ 00:30:25.531 END TEST nvmf_multicontroller 00:30:25.531 ************************************ 00:30:25.531 13:42:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:25.531 13:42:00 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:25.531 13:42:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:25.531 13:42:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:25.531 13:42:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:25.531 ************************************ 00:30:25.532 START TEST nvmf_aer 00:30:25.532 ************************************ 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:25.532 * Looking for test storage... 
00:30:25.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:30:25.532 13:42:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:27.432 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:27.433 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:30:27.433 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:27.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:27.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:27.433 
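The "Found net devices under 0000:0a:00.x" messages above come from resolving each E810 PCI function to its kernel netdev via sysfs; cvl_0_0 is then chosen as the target interface and cvl_0_1 as the initiator. A minimal stand-alone version of that lookup (PCI addresses taken from the trace; the glob mirrors the pci_net_devs expansion in nvmf/common.sh):

  # Print the netdev registered under each port of the test adapter
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net devices under $pci: $(basename "$dev")"
      done
  done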
13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:27.433 13:42:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:27.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:27.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:30:27.433 00:30:27.433 --- 10.0.0.2 ping statistics --- 00:30:27.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.433 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:27.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:27.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:30:27.433 00:30:27.433 --- 10.0.0.1 ping statistics --- 00:30:27.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.433 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=386048 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 386048 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 386048 ']' 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:27.433 13:42:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:27.693 [2024-07-13 13:42:02.215704] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:27.693 [2024-07-13 13:42:02.215863] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.693 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.693 [2024-07-13 13:42:02.366115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:27.948 [2024-07-13 13:42:02.665619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.948 [2024-07-13 13:42:02.665681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
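Once nvmf_tgt is running inside the cvl_0_0_ns_spdk namespace, the rest of the trace configures it purely over JSON-RPC and then runs the AER tool against it. A condensed sketch of that sequence, assuming the test's rpc_cmd calls map one-to-one onto scripts/rpc.py talking to the default /var/tmp/spdk.sock (reachable from the root namespace because it is a Unix socket; paths abbreviated relative to the spdk checkout):

  # Transport, a subsystem capped at 2 namespaces, one backing bdev, TCP listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Start the AER tool waiting for a namespace-attribute notice, then add a
  # second namespace; the resulting AEN is what the trace below reports as
  # "aer_cb - Changed Namespace".
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2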
00:30:27.948 [2024-07-13 13:42:02.665714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.948 [2024-07-13 13:42:02.665730] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.948 [2024-07-13 13:42:02.665746] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:27.948 [2024-07-13 13:42:02.665893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.948 [2024-07-13 13:42:02.666015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:27.948 [2024-07-13 13:42:02.666370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:27.948 [2024-07-13 13:42:02.666406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.512 [2024-07-13 13:42:03.160313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.512 Malloc0 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.512 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.769 [2024-07-13 13:42:03.265442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.769 [ 00:30:28.769 { 00:30:28.769 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:28.769 "subtype": "Discovery", 00:30:28.769 "listen_addresses": [], 00:30:28.769 "allow_any_host": true, 00:30:28.769 "hosts": [] 00:30:28.769 }, 00:30:28.769 { 00:30:28.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.769 "subtype": "NVMe", 00:30:28.769 "listen_addresses": [ 00:30:28.769 { 00:30:28.769 "trtype": "TCP", 00:30:28.769 "adrfam": "IPv4", 00:30:28.769 "traddr": "10.0.0.2", 00:30:28.769 "trsvcid": "4420" 00:30:28.769 } 00:30:28.769 ], 00:30:28.769 "allow_any_host": true, 00:30:28.769 "hosts": [], 00:30:28.769 "serial_number": "SPDK00000000000001", 00:30:28.769 "model_number": "SPDK bdev Controller", 00:30:28.769 "max_namespaces": 2, 00:30:28.769 "min_cntlid": 1, 00:30:28.769 "max_cntlid": 65519, 00:30:28.769 "namespaces": [ 00:30:28.769 { 00:30:28.769 "nsid": 1, 00:30:28.769 "bdev_name": "Malloc0", 00:30:28.769 "name": "Malloc0", 00:30:28.769 "nguid": "CEC411468B7E4EB4BA899E7160174F3F", 00:30:28.769 "uuid": "cec41146-8b7e-4eb4-ba89-9e7160174f3f" 00:30:28.769 } 00:30:28.769 ] 00:30:28.769 } 00:30:28.769 ] 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=386232 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:28.769 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:30:28.769 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:29.035 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:29.035 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:30:29.035 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:30:29.035 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:29.035 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:29.035 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:29.035 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:30:29.035 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:29.035 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.035 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.302 Malloc1 00:30:29.302 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.302 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:29.302 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.302 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.302 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.302 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:29.302 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.302 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.302 [ 00:30:29.302 { 00:30:29.302 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:29.302 "subtype": "Discovery", 00:30:29.302 "listen_addresses": [], 00:30:29.302 "allow_any_host": true, 00:30:29.302 "hosts": [] 00:30:29.302 }, 00:30:29.302 { 00:30:29.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:29.302 "subtype": "NVMe", 00:30:29.302 "listen_addresses": [ 00:30:29.302 { 00:30:29.302 "trtype": "TCP", 00:30:29.302 "adrfam": "IPv4", 00:30:29.302 "traddr": "10.0.0.2", 00:30:29.302 "trsvcid": "4420" 00:30:29.302 } 00:30:29.302 ], 00:30:29.302 "allow_any_host": true, 00:30:29.302 "hosts": [], 00:30:29.302 "serial_number": "SPDK00000000000001", 00:30:29.302 "model_number": "SPDK bdev Controller", 00:30:29.302 "max_namespaces": 2, 00:30:29.302 "min_cntlid": 1, 00:30:29.302 "max_cntlid": 65519, 00:30:29.302 "namespaces": [ 00:30:29.302 { 00:30:29.302 "nsid": 1, 00:30:29.302 "bdev_name": "Malloc0", 00:30:29.302 "name": "Malloc0", 00:30:29.302 "nguid": "CEC411468B7E4EB4BA899E7160174F3F", 00:30:29.302 "uuid": "cec41146-8b7e-4eb4-ba89-9e7160174f3f" 00:30:29.302 }, 00:30:29.302 { 00:30:29.302 "nsid": 2, 00:30:29.302 "bdev_name": "Malloc1", 00:30:29.302 "name": "Malloc1", 00:30:29.302 "nguid": "21D185C357F0409DBD6EC92645FBDB03", 00:30:29.302 "uuid": "21d185c3-57f0-409d-bd6e-c92645fbdb03" 00:30:29.302 } 00:30:29.302 ] 00:30:29.302 } 00:30:29.302 ] 00:30:29.302 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.302 13:42:03 
nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 386232 00:30:29.302 Asynchronous Event Request test 00:30:29.302 Attaching to 10.0.0.2 00:30:29.302 Attached to 10.0.0.2 00:30:29.302 Registering asynchronous event callbacks... 00:30:29.302 Starting namespace attribute notice tests for all controllers... 00:30:29.302 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:29.302 aer_cb - Changed Namespace 00:30:29.302 Cleaning up... 00:30:29.302 13:42:03 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:29.302 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.302 13:42:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:29.560 13:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:29.817 rmmod nvme_tcp 00:30:29.817 rmmod nvme_fabrics 00:30:29.817 rmmod nvme_keyring 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 386048 ']' 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 386048 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 386048 ']' 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 386048 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 386048 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 386048' 
00:30:29.817 killing process with pid 386048 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 386048 00:30:29.817 13:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 386048 00:30:31.189 13:42:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:31.189 13:42:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:31.189 13:42:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:31.189 13:42:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:31.189 13:42:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:31.190 13:42:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.190 13:42:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:31.190 13:42:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.089 13:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:33.089 00:30:33.089 real 0m7.638s 00:30:33.089 user 0m11.110s 00:30:33.089 sys 0m2.143s 00:30:33.089 13:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:33.089 13:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:33.089 ************************************ 00:30:33.089 END TEST nvmf_aer 00:30:33.089 ************************************ 00:30:33.089 13:42:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:33.089 13:42:07 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:33.089 13:42:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:33.089 13:42:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:33.089 13:42:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:33.089 ************************************ 00:30:33.089 START TEST nvmf_async_init 00:30:33.089 ************************************ 00:30:33.089 13:42:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:33.089 * Looking for test storage... 
00:30:33.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:33.089 13:42:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.089 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3a345fb405284c0885a3b63ad10579ac 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:33.090 13:42:07 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:33.090 13:42:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.348 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:33.348 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:33.348 13:42:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:30:33.348 13:42:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:35.246 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:35.246 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:35.246 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:35.246 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.246 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:35.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:30:35.247 00:30:35.247 --- 10.0.0.2 ping statistics --- 00:30:35.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.247 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:35.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:30:35.247 00:30:35.247 --- 10.0.0.1 ping statistics --- 00:30:35.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.247 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=388836 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 388836 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 388836 ']' 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:35.247 13:42:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:35.247 [2024-07-13 13:42:09.817769] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
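For reference, the loopback topology the harness has just built looks like this: one E810 port (cvl_0_0) is moved into a private network namespace and carries the target address 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the target application is then launched inside that namespace. The sketch below condenses the commands traced above; the interface names, addresses and binary path are simply what this host reports, not fixed values.

# reconstructed from the trace above: wire the two ice/E810 ports into an
# initiator (root netns) / target (cvl_0_0_ns_spdk netns) pair
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
ping -c 1 10.0.0.2                                                   # root netns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target netns -> initiator
# the target then runs inside the namespace, e.g.:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1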
00:30:35.247 [2024-07-13 13:42:09.817936] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.247 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.247 [2024-07-13 13:42:09.951501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.505 [2024-07-13 13:42:10.208047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.505 [2024-07-13 13:42:10.208131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.505 [2024-07-13 13:42:10.208158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.505 [2024-07-13 13:42:10.208183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.505 [2024-07-13 13:42:10.208204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.505 [2024-07-13 13:42:10.208260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.070 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:36.070 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:30:36.070 13:42:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:36.070 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:36.070 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.070 13:42:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.070 13:42:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:36.070 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.070 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.070 [2024-07-13 13:42:10.810029] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.070 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.070 13:42:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.331 null0 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.331 13:42:10 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3a345fb405284c0885a3b63ad10579ac 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.331 [2024-07-13 13:42:10.850288] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.331 13:42:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.612 nvme0n1 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.612 [ 00:30:36.612 { 00:30:36.612 "name": "nvme0n1", 00:30:36.612 "aliases": [ 00:30:36.612 "3a345fb4-0528-4c08-85a3-b63ad10579ac" 00:30:36.612 ], 00:30:36.612 "product_name": "NVMe disk", 00:30:36.612 "block_size": 512, 00:30:36.612 "num_blocks": 2097152, 00:30:36.612 "uuid": "3a345fb4-0528-4c08-85a3-b63ad10579ac", 00:30:36.612 "assigned_rate_limits": { 00:30:36.612 "rw_ios_per_sec": 0, 00:30:36.612 "rw_mbytes_per_sec": 0, 00:30:36.612 "r_mbytes_per_sec": 0, 00:30:36.612 "w_mbytes_per_sec": 0 00:30:36.612 }, 00:30:36.612 "claimed": false, 00:30:36.612 "zoned": false, 00:30:36.612 "supported_io_types": { 00:30:36.612 "read": true, 00:30:36.612 "write": true, 00:30:36.612 "unmap": false, 00:30:36.612 "flush": true, 00:30:36.612 "reset": true, 00:30:36.612 "nvme_admin": true, 00:30:36.612 "nvme_io": true, 00:30:36.612 "nvme_io_md": false, 00:30:36.612 "write_zeroes": true, 00:30:36.612 "zcopy": false, 00:30:36.612 "get_zone_info": false, 00:30:36.612 "zone_management": false, 00:30:36.612 "zone_append": false, 00:30:36.612 "compare": true, 00:30:36.612 "compare_and_write": true, 00:30:36.612 "abort": true, 00:30:36.612 "seek_hole": false, 00:30:36.612 "seek_data": false, 00:30:36.612 "copy": true, 00:30:36.612 "nvme_iov_md": false 00:30:36.612 }, 00:30:36.612 "memory_domains": [ 00:30:36.612 { 00:30:36.612 "dma_device_id": "system", 00:30:36.612 "dma_device_type": 1 00:30:36.612 } 00:30:36.612 ], 00:30:36.612 "driver_specific": { 00:30:36.612 "nvme": [ 00:30:36.612 { 00:30:36.612 "trid": { 00:30:36.612 "trtype": "TCP", 00:30:36.612 "adrfam": "IPv4", 00:30:36.612 "traddr": "10.0.0.2", 
00:30:36.612 "trsvcid": "4420", 00:30:36.612 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:36.612 }, 00:30:36.612 "ctrlr_data": { 00:30:36.612 "cntlid": 1, 00:30:36.612 "vendor_id": "0x8086", 00:30:36.612 "model_number": "SPDK bdev Controller", 00:30:36.612 "serial_number": "00000000000000000000", 00:30:36.612 "firmware_revision": "24.09", 00:30:36.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:36.612 "oacs": { 00:30:36.612 "security": 0, 00:30:36.612 "format": 0, 00:30:36.612 "firmware": 0, 00:30:36.612 "ns_manage": 0 00:30:36.612 }, 00:30:36.612 "multi_ctrlr": true, 00:30:36.612 "ana_reporting": false 00:30:36.612 }, 00:30:36.612 "vs": { 00:30:36.612 "nvme_version": "1.3" 00:30:36.612 }, 00:30:36.612 "ns_data": { 00:30:36.612 "id": 1, 00:30:36.612 "can_share": true 00:30:36.612 } 00:30:36.612 } 00:30:36.612 ], 00:30:36.612 "mp_policy": "active_passive" 00:30:36.612 } 00:30:36.612 } 00:30:36.612 ] 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.612 [2024-07-13 13:42:11.106608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:36.612 [2024-07-13 13:42:11.106719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:30:36.612 [2024-07-13 13:42:11.239119] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.612 [ 00:30:36.612 { 00:30:36.612 "name": "nvme0n1", 00:30:36.612 "aliases": [ 00:30:36.612 "3a345fb4-0528-4c08-85a3-b63ad10579ac" 00:30:36.612 ], 00:30:36.612 "product_name": "NVMe disk", 00:30:36.612 "block_size": 512, 00:30:36.612 "num_blocks": 2097152, 00:30:36.612 "uuid": "3a345fb4-0528-4c08-85a3-b63ad10579ac", 00:30:36.612 "assigned_rate_limits": { 00:30:36.612 "rw_ios_per_sec": 0, 00:30:36.612 "rw_mbytes_per_sec": 0, 00:30:36.612 "r_mbytes_per_sec": 0, 00:30:36.612 "w_mbytes_per_sec": 0 00:30:36.612 }, 00:30:36.612 "claimed": false, 00:30:36.612 "zoned": false, 00:30:36.612 "supported_io_types": { 00:30:36.612 "read": true, 00:30:36.612 "write": true, 00:30:36.612 "unmap": false, 00:30:36.612 "flush": true, 00:30:36.612 "reset": true, 00:30:36.612 "nvme_admin": true, 00:30:36.612 "nvme_io": true, 00:30:36.612 "nvme_io_md": false, 00:30:36.612 "write_zeroes": true, 00:30:36.612 "zcopy": false, 00:30:36.612 "get_zone_info": false, 00:30:36.612 "zone_management": false, 00:30:36.612 "zone_append": false, 00:30:36.612 "compare": true, 00:30:36.612 "compare_and_write": true, 00:30:36.612 "abort": true, 00:30:36.612 "seek_hole": false, 00:30:36.612 "seek_data": false, 00:30:36.612 "copy": true, 00:30:36.612 "nvme_iov_md": false 00:30:36.612 }, 00:30:36.612 "memory_domains": [ 00:30:36.612 { 00:30:36.612 "dma_device_id": "system", 00:30:36.612 
"dma_device_type": 1 00:30:36.612 } 00:30:36.612 ], 00:30:36.612 "driver_specific": { 00:30:36.612 "nvme": [ 00:30:36.612 { 00:30:36.612 "trid": { 00:30:36.612 "trtype": "TCP", 00:30:36.612 "adrfam": "IPv4", 00:30:36.612 "traddr": "10.0.0.2", 00:30:36.612 "trsvcid": "4420", 00:30:36.612 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:36.612 }, 00:30:36.612 "ctrlr_data": { 00:30:36.612 "cntlid": 2, 00:30:36.612 "vendor_id": "0x8086", 00:30:36.612 "model_number": "SPDK bdev Controller", 00:30:36.612 "serial_number": "00000000000000000000", 00:30:36.612 "firmware_revision": "24.09", 00:30:36.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:36.612 "oacs": { 00:30:36.612 "security": 0, 00:30:36.612 "format": 0, 00:30:36.612 "firmware": 0, 00:30:36.612 "ns_manage": 0 00:30:36.612 }, 00:30:36.612 "multi_ctrlr": true, 00:30:36.612 "ana_reporting": false 00:30:36.612 }, 00:30:36.612 "vs": { 00:30:36.612 "nvme_version": "1.3" 00:30:36.612 }, 00:30:36.612 "ns_data": { 00:30:36.612 "id": 1, 00:30:36.612 "can_share": true 00:30:36.612 } 00:30:36.612 } 00:30:36.612 ], 00:30:36.612 "mp_policy": "active_passive" 00:30:36.612 } 00:30:36.612 } 00:30:36.612 ] 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.VQ2YB2QfTE 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.VQ2YB2QfTE 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.612 [2024-07-13 13:42:11.295386] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:36.612 [2024-07-13 13:42:11.295596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VQ2YB2QfTE 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.612 [2024-07-13 13:42:11.303380] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VQ2YB2QfTE 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.612 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.612 [2024-07-13 13:42:11.311405] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:36.612 [2024-07-13 13:42:11.311506] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:36.870 nvme0n1 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.871 [ 00:30:36.871 { 00:30:36.871 "name": "nvme0n1", 00:30:36.871 "aliases": [ 00:30:36.871 "3a345fb4-0528-4c08-85a3-b63ad10579ac" 00:30:36.871 ], 00:30:36.871 "product_name": "NVMe disk", 00:30:36.871 "block_size": 512, 00:30:36.871 "num_blocks": 2097152, 00:30:36.871 "uuid": "3a345fb4-0528-4c08-85a3-b63ad10579ac", 00:30:36.871 "assigned_rate_limits": { 00:30:36.871 "rw_ios_per_sec": 0, 00:30:36.871 "rw_mbytes_per_sec": 0, 00:30:36.871 "r_mbytes_per_sec": 0, 00:30:36.871 "w_mbytes_per_sec": 0 00:30:36.871 }, 00:30:36.871 "claimed": false, 00:30:36.871 "zoned": false, 00:30:36.871 "supported_io_types": { 00:30:36.871 "read": true, 00:30:36.871 "write": true, 00:30:36.871 "unmap": false, 00:30:36.871 "flush": true, 00:30:36.871 "reset": true, 00:30:36.871 "nvme_admin": true, 00:30:36.871 "nvme_io": true, 00:30:36.871 "nvme_io_md": false, 00:30:36.871 "write_zeroes": true, 00:30:36.871 "zcopy": false, 00:30:36.871 "get_zone_info": false, 00:30:36.871 "zone_management": false, 00:30:36.871 "zone_append": false, 00:30:36.871 "compare": true, 00:30:36.871 "compare_and_write": true, 00:30:36.871 "abort": true, 00:30:36.871 "seek_hole": false, 00:30:36.871 "seek_data": false, 00:30:36.871 "copy": true, 00:30:36.871 "nvme_iov_md": false 00:30:36.871 }, 00:30:36.871 "memory_domains": [ 00:30:36.871 { 00:30:36.871 "dma_device_id": "system", 00:30:36.871 "dma_device_type": 1 00:30:36.871 } 00:30:36.871 ], 00:30:36.871 "driver_specific": { 00:30:36.871 "nvme": [ 00:30:36.871 { 00:30:36.871 "trid": { 00:30:36.871 "trtype": "TCP", 00:30:36.871 "adrfam": "IPv4", 00:30:36.871 "traddr": "10.0.0.2", 00:30:36.871 "trsvcid": "4421", 00:30:36.871 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:36.871 }, 00:30:36.871 "ctrlr_data": { 00:30:36.871 "cntlid": 3, 00:30:36.871 "vendor_id": "0x8086", 00:30:36.871 "model_number": "SPDK bdev Controller", 00:30:36.871 "serial_number": "00000000000000000000", 00:30:36.871 "firmware_revision": "24.09", 00:30:36.871 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:30:36.871 "oacs": { 00:30:36.871 "security": 0, 00:30:36.871 "format": 0, 00:30:36.871 "firmware": 0, 00:30:36.871 "ns_manage": 0 00:30:36.871 }, 00:30:36.871 "multi_ctrlr": true, 00:30:36.871 "ana_reporting": false 00:30:36.871 }, 00:30:36.871 "vs": { 00:30:36.871 "nvme_version": "1.3" 00:30:36.871 }, 00:30:36.871 "ns_data": { 00:30:36.871 "id": 1, 00:30:36.871 "can_share": true 00:30:36.871 } 00:30:36.871 } 00:30:36.871 ], 00:30:36.871 "mp_policy": "active_passive" 00:30:36.871 } 00:30:36.871 } 00:30:36.871 ] 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.VQ2YB2QfTE 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:36.871 rmmod nvme_tcp 00:30:36.871 rmmod nvme_fabrics 00:30:36.871 rmmod nvme_keyring 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 388836 ']' 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 388836 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 388836 ']' 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 388836 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 388836 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 388836' 00:30:36.871 killing process with pid 388836 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 388836 00:30:36.871 [2024-07-13 13:42:11.491564] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:36.871 [2024-07-13 13:42:11.491620] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:36.871 13:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 388836 00:30:38.247 13:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:38.247 13:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:38.247 13:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:38.247 13:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:38.247 13:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:38.247 13:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.247 13:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:38.247 13:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.165 13:42:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:40.165 00:30:40.165 real 0m7.093s 00:30:40.165 user 0m3.941s 00:30:40.165 sys 0m1.812s 00:30:40.165 13:42:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:40.165 13:42:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.165 ************************************ 00:30:40.165 END TEST nvmf_async_init 00:30:40.165 ************************************ 00:30:40.165 13:42:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:40.165 13:42:14 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:40.165 13:42:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:40.165 13:42:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:40.165 13:42:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:40.165 ************************************ 00:30:40.165 START TEST dma 00:30:40.165 ************************************ 00:30:40.165 13:42:14 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:40.424 * Looking for test storage... 
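Before following the dma output below, it helps to see the async_init flow that just completed in one place. Every configuration step above goes through rpc_cmd, which in this harness wraps scripts/rpc.py against the target's /var/tmp/spdk.sock; treat the direct rpc.py form below as a sketch of that sequence rather than a verbatim replay.

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc bdev_null_create null0 1024 512          # 1024 MiB null bdev, 512 B blocks (2097152 blocks)
$rpc bdev_wait_for_examine
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3a345fb405284c0885a3b63ad10579ac
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$rpc bdev_get_bdevs -b nvme0n1                # the nguid above comes back as the bdev uuid
$rpc bdev_nvme_reset_controller nvme0         # cntlid moves from 1 to 2 in the second bdev dump
$rpc bdev_nvme_detach_controller nvme0
# TLS leg: restrict the subsystem, add a PSK-protected listener on 4421, reconnect with the key
key=$(mktemp) && echo -n 'NVMeTLSkey-1:01:MDAx...' > "$key" && chmod 0600 "$key"   # full key in the trace
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"
$rpc bdev_nvme_detach_controller nvme0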
00:30:40.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:40.424 13:42:14 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.424 13:42:14 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.424 13:42:14 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.424 13:42:14 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.424 13:42:14 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.424 13:42:14 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.424 13:42:14 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.424 13:42:14 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:30:40.424 13:42:14 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:40.424 13:42:14 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:40.424 13:42:14 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:40.424 13:42:14 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:30:40.424 00:30:40.424 real 0m0.061s 00:30:40.424 user 0m0.032s 00:30:40.424 sys 0m0.033s 00:30:40.424 13:42:14 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:40.424 13:42:14 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:30:40.424 ************************************ 00:30:40.424 END TEST dma 00:30:40.424 ************************************ 00:30:40.424 13:42:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:40.424 13:42:14 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:40.424 13:42:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:40.424 13:42:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:40.424 13:42:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:40.424 ************************************ 00:30:40.424 START TEST nvmf_identify 00:30:40.424 ************************************ 00:30:40.424 13:42:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:40.424 * Looking for test storage... 
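The dma test above finishes in well under a second because host/dma.sh has nothing to do on a TCP run: its only real work sits behind a transport check (visible at dma.sh lines 12-13 in the trace), so with tcp it exits 0 immediately and the PASS is trivial. In sketch form, with the transport value already substituted the way the xtrace shows it:

# essence of host/dma.sh for this configuration: the DMA offload paths are only
# exercised for RDMA transports, so a TCP run bails out straight away
if [ tcp != rdma ]; then
    exit 0
fi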
00:30:40.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.424 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:30:40.425 13:42:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:42.328 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:42.328 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:42.328 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:42.328 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.328 13:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.328 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.328 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.328 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:42.328 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:42.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:30:42.587 00:30:42.587 --- 10.0.0.2 ping statistics --- 00:30:42.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.587 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:42.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:30:42.587 00:30:42.587 --- 10.0.0.1 ping statistics --- 00:30:42.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.587 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=391221 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 391221 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 391221 ']' 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:42.587 13:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:42.587 [2024-07-13 13:42:17.212342] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:42.587 [2024-07-13 13:42:17.212488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.587 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.845 [2024-07-13 13:42:17.343595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:43.102 [2024-07-13 13:42:17.598594] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:43.102 [2024-07-13 13:42:17.598662] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.102 [2024-07-13 13:42:17.598699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.102 [2024-07-13 13:42:17.598719] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.102 [2024-07-13 13:42:17.598750] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.102 [2024-07-13 13:42:17.598891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.102 [2024-07-13 13:42:17.598952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.102 [2024-07-13 13:42:17.599033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.102 [2024-07-13 13:42:17.599042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.667 [2024-07-13 13:42:18.162448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.667 Malloc0 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.667 [2024-07-13 13:42:18.292373] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.667 [ 00:30:43.667 { 00:30:43.667 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:43.667 "subtype": "Discovery", 00:30:43.667 "listen_addresses": [ 00:30:43.667 { 00:30:43.667 "trtype": "TCP", 00:30:43.667 "adrfam": "IPv4", 00:30:43.667 "traddr": "10.0.0.2", 00:30:43.667 "trsvcid": "4420" 00:30:43.667 } 00:30:43.667 ], 00:30:43.667 "allow_any_host": true, 00:30:43.667 "hosts": [] 00:30:43.667 }, 00:30:43.667 { 00:30:43.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.667 "subtype": "NVMe", 00:30:43.667 "listen_addresses": [ 00:30:43.667 { 00:30:43.667 "trtype": "TCP", 00:30:43.667 "adrfam": "IPv4", 00:30:43.667 "traddr": "10.0.0.2", 00:30:43.667 "trsvcid": "4420" 00:30:43.667 } 00:30:43.667 ], 00:30:43.667 "allow_any_host": true, 00:30:43.667 "hosts": [], 00:30:43.667 "serial_number": "SPDK00000000000001", 00:30:43.667 "model_number": "SPDK bdev Controller", 00:30:43.667 "max_namespaces": 32, 00:30:43.667 "min_cntlid": 1, 00:30:43.667 "max_cntlid": 65519, 00:30:43.667 "namespaces": [ 00:30:43.667 { 00:30:43.667 "nsid": 1, 00:30:43.667 "bdev_name": "Malloc0", 00:30:43.667 "name": "Malloc0", 00:30:43.667 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:43.667 "eui64": "ABCDEF0123456789", 00:30:43.667 "uuid": "8ae83cb1-b372-499f-b4d2-db59cf632517" 00:30:43.667 } 00:30:43.667 ] 00:30:43.667 } 00:30:43.667 ] 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.667 13:42:18 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:43.667 [2024-07-13 13:42:18.363426] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:30:43.667 [2024-07-13 13:42:18.363544] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391373 ] 00:30:43.667 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.927 [2024-07-13 13:42:18.428656] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:43.927 [2024-07-13 13:42:18.428780] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:43.927 [2024-07-13 13:42:18.428801] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:43.927 [2024-07-13 13:42:18.428833] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:43.927 [2024-07-13 13:42:18.428857] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:43.927 [2024-07-13 13:42:18.433169] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:43.927 [2024-07-13 13:42:18.433237] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:43.927 [2024-07-13 13:42:18.447893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:43.927 [2024-07-13 13:42:18.447925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:43.927 [2024-07-13 13:42:18.447941] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:43.927 [2024-07-13 13:42:18.447952] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:43.927 [2024-07-13 13:42:18.448026] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.448047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.448066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:43.928 [2024-07-13 13:42:18.448099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:43.928 [2024-07-13 13:42:18.448139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:43.928 [2024-07-13 13:42:18.455914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.928 [2024-07-13 13:42:18.455941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.928 [2024-07-13 13:42:18.455959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.455974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:43.928 [2024-07-13 13:42:18.456005] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:43.928 [2024-07-13 13:42:18.456050] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:43.928 [2024-07-13 13:42:18.456067] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:43.928 [2024-07-13 13:42:18.456099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.456113] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.456126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:43.928 [2024-07-13 13:42:18.456147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.928 [2024-07-13 13:42:18.456197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:43.928 [2024-07-13 13:42:18.456388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.928 [2024-07-13 13:42:18.456410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.928 [2024-07-13 13:42:18.456422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.456439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:43.928 [2024-07-13 13:42:18.456455] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:30:43.928 [2024-07-13 13:42:18.456476] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:43.928 [2024-07-13 13:42:18.456496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.456510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.456522] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:43.928 [2024-07-13 13:42:18.456547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.928 [2024-07-13 13:42:18.456580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:43.928 [2024-07-13 13:42:18.456768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.928 [2024-07-13 13:42:18.456789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.928 [2024-07-13 13:42:18.456801] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.456815] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:43.928 [2024-07-13 13:42:18.456831] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:43.928 [2024-07-13 13:42:18.456883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:43.928 [2024-07-13 13:42:18.456906] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.456920] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.456932] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:43.928 [2024-07-13 13:42:18.456951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.928 [2024-07-13 13:42:18.456983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:43.928 [2024-07-13 13:42:18.457186] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.928 [2024-07-13 13:42:18.457208] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.928 [2024-07-13 13:42:18.457219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.457230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:43.928 [2024-07-13 13:42:18.457245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:43.928 [2024-07-13 13:42:18.457271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.457291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.457304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:43.928 [2024-07-13 13:42:18.457327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.928 [2024-07-13 13:42:18.457357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:43.928 [2024-07-13 13:42:18.457507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.928 [2024-07-13 13:42:18.457533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.928 [2024-07-13 13:42:18.457551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.457563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:43.928 [2024-07-13 13:42:18.457578] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:43.928 [2024-07-13 13:42:18.457592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:43.928 [2024-07-13 13:42:18.457613] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:43.928 [2024-07-13 13:42:18.457730] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:43.928 [2024-07-13 13:42:18.457744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:43.928 [2024-07-13 13:42:18.457777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.457791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.457802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:43.928 [2024-07-13 13:42:18.457826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.928 [2024-07-13 13:42:18.457884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:43.928 [2024-07-13 13:42:18.458071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.928 [2024-07-13 13:42:18.458091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:30:43.928 [2024-07-13 13:42:18.458102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.458113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:43.928 [2024-07-13 13:42:18.458128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:43.928 [2024-07-13 13:42:18.458155] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.458171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.458198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:43.928 [2024-07-13 13:42:18.458221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.928 [2024-07-13 13:42:18.458257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:43.928 [2024-07-13 13:42:18.458446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.928 [2024-07-13 13:42:18.458470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.928 [2024-07-13 13:42:18.458482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.928 [2024-07-13 13:42:18.458493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:43.928 [2024-07-13 13:42:18.458506] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:43.928 [2024-07-13 13:42:18.458536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:30:43.929 [2024-07-13 13:42:18.458562] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:30:43.929 [2024-07-13 13:42:18.458588] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:30:43.929 [2024-07-13 13:42:18.458614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.458632] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:43.929 [2024-07-13 13:42:18.458654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.929 [2024-07-13 13:42:18.458684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:43.929 [2024-07-13 13:42:18.458912] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.929 [2024-07-13 13:42:18.458940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:43.929 [2024-07-13 13:42:18.458953] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.458966] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:43.929 [2024-07-13 13:42:18.458980] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:43.929 [2024-07-13 13:42:18.458993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.459023] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.459044] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.503892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.929 [2024-07-13 13:42:18.503922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.929 [2024-07-13 13:42:18.503934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.503946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:43.929 [2024-07-13 13:42:18.503977] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:30:43.929 [2024-07-13 13:42:18.503994] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:30:43.929 [2024-07-13 13:42:18.504007] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:30:43.929 [2024-07-13 13:42:18.504022] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:30:43.929 [2024-07-13 13:42:18.504038] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:30:43.929 [2024-07-13 13:42:18.504053] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:30:43.929 [2024-07-13 13:42:18.504076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:30:43.929 [2024-07-13 13:42:18.504117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.504136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.504165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:43.929 [2024-07-13 13:42:18.504191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:43.929 [2024-07-13 13:42:18.504227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:43.929 [2024-07-13 13:42:18.504410] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.929 [2024-07-13 13:42:18.504430] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.929 [2024-07-13 13:42:18.504442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.504453] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:43.929 [2024-07-13 13:42:18.504472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.504486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.504504] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 
00:30:43.929 [2024-07-13 13:42:18.504524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.929 [2024-07-13 13:42:18.504549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.504561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.504571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:43.929 [2024-07-13 13:42:18.504587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.929 [2024-07-13 13:42:18.504603] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.504615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.504625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:43.929 [2024-07-13 13:42:18.504641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.929 [2024-07-13 13:42:18.504656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.504668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.504693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.929 [2024-07-13 13:42:18.504708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.929 [2024-07-13 13:42:18.504721] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:30:43.929 [2024-07-13 13:42:18.504764] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:43.929 [2024-07-13 13:42:18.504789] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.504802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:43.929 [2024-07-13 13:42:18.504821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.929 [2024-07-13 13:42:18.504877] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:43.929 [2024-07-13 13:42:18.504897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:43.929 [2024-07-13 13:42:18.504910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:43.929 [2024-07-13 13:42:18.504922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.929 [2024-07-13 13:42:18.504934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:43.929 [2024-07-13 13:42:18.505118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.929 [2024-07-13 13:42:18.505140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.929 [2024-07-13 13:42:18.505151] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.505162] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:43.929 [2024-07-13 13:42:18.505193] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:30:43.929 [2024-07-13 13:42:18.505208] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:30:43.929 [2024-07-13 13:42:18.505242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.505258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:43.929 [2024-07-13 13:42:18.505282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.929 [2024-07-13 13:42:18.505313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:43.929 [2024-07-13 13:42:18.505519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.929 [2024-07-13 13:42:18.505548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:43.929 [2024-07-13 13:42:18.505562] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.505574] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:43.929 [2024-07-13 13:42:18.505588] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:43.929 [2024-07-13 13:42:18.505600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.505619] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.505632] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.929 [2024-07-13 13:42:18.505710] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.929 [2024-07-13 13:42:18.505732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.930 [2024-07-13 13:42:18.505744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.505756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:43.930 [2024-07-13 13:42:18.505798] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:30:43.930 [2024-07-13 13:42:18.505902] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.505922] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:43.930 [2024-07-13 13:42:18.505947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.930 [2024-07-13 13:42:18.505968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.505981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.505993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=5 on tqpair(0x615000015700) 00:30:43.930 [2024-07-13 13:42:18.506010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.930 [2024-07-13 13:42:18.506043] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:43.930 [2024-07-13 13:42:18.506061] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:43.930 [2024-07-13 13:42:18.506390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.930 [2024-07-13 13:42:18.506411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:43.930 [2024-07-13 13:42:18.506429] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.506441] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:43.930 [2024-07-13 13:42:18.506453] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:43.930 [2024-07-13 13:42:18.506465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.506482] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.506496] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.506514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.930 [2024-07-13 13:42:18.506530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.930 [2024-07-13 13:42:18.506541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.506553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:43.930 [2024-07-13 13:42:18.547023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.930 [2024-07-13 13:42:18.547052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.930 [2024-07-13 13:42:18.547065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.547076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:43.930 [2024-07-13 13:42:18.547114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.547131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:43.930 [2024-07-13 13:42:18.547160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.930 [2024-07-13 13:42:18.547204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:43.930 [2024-07-13 13:42:18.547421] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.930 [2024-07-13 13:42:18.547444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:43.930 [2024-07-13 13:42:18.547455] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.547466] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:43.930 [2024-07-13 13:42:18.547478] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:30:43.930 [2024-07-13 13:42:18.547489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.547505] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.547517] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.547582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.930 [2024-07-13 13:42:18.547600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.930 [2024-07-13 13:42:18.547611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.547622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:43.930 [2024-07-13 13:42:18.547648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.547664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:43.930 [2024-07-13 13:42:18.547683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.930 [2024-07-13 13:42:18.547737] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:43.930 [2024-07-13 13:42:18.551897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.930 [2024-07-13 13:42:18.551921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:43.930 [2024-07-13 13:42:18.551933] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.551943] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:30:43.930 [2024-07-13 13:42:18.551955] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:30:43.930 [2024-07-13 13:42:18.551966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.551982] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.551994] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.589897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.930 [2024-07-13 13:42:18.589926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.930 [2024-07-13 13:42:18.589938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.930 [2024-07-13 13:42:18.589950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:43.930 ===================================================== 00:30:43.930 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:43.930 ===================================================== 00:30:43.930 Controller Capabilities/Features 00:30:43.930 ================================ 00:30:43.930 Vendor ID: 0000 00:30:43.930 Subsystem Vendor ID: 0000 00:30:43.930 Serial Number: .................... 00:30:43.930 Model Number: ........................................ 
00:30:43.930 Firmware Version: 24.09 00:30:43.930 Recommended Arb Burst: 0 00:30:43.930 IEEE OUI Identifier: 00 00 00 00:30:43.930 Multi-path I/O 00:30:43.930 May have multiple subsystem ports: No 00:30:43.930 May have multiple controllers: No 00:30:43.930 Associated with SR-IOV VF: No 00:30:43.930 Max Data Transfer Size: 131072 00:30:43.930 Max Number of Namespaces: 0 00:30:43.930 Max Number of I/O Queues: 1024 00:30:43.930 NVMe Specification Version (VS): 1.3 00:30:43.930 NVMe Specification Version (Identify): 1.3 00:30:43.930 Maximum Queue Entries: 128 00:30:43.930 Contiguous Queues Required: Yes 00:30:43.930 Arbitration Mechanisms Supported 00:30:43.930 Weighted Round Robin: Not Supported 00:30:43.930 Vendor Specific: Not Supported 00:30:43.930 Reset Timeout: 15000 ms 00:30:43.930 Doorbell Stride: 4 bytes 00:30:43.930 NVM Subsystem Reset: Not Supported 00:30:43.930 Command Sets Supported 00:30:43.930 NVM Command Set: Supported 00:30:43.930 Boot Partition: Not Supported 00:30:43.930 Memory Page Size Minimum: 4096 bytes 00:30:43.930 Memory Page Size Maximum: 4096 bytes 00:30:43.930 Persistent Memory Region: Not Supported 00:30:43.930 Optional Asynchronous Events Supported 00:30:43.930 Namespace Attribute Notices: Not Supported 00:30:43.930 Firmware Activation Notices: Not Supported 00:30:43.930 ANA Change Notices: Not Supported 00:30:43.930 PLE Aggregate Log Change Notices: Not Supported 00:30:43.930 LBA Status Info Alert Notices: Not Supported 00:30:43.930 EGE Aggregate Log Change Notices: Not Supported 00:30:43.930 Normal NVM Subsystem Shutdown event: Not Supported 00:30:43.930 Zone Descriptor Change Notices: Not Supported 00:30:43.930 Discovery Log Change Notices: Supported 00:30:43.930 Controller Attributes 00:30:43.930 128-bit Host Identifier: Not Supported 00:30:43.930 Non-Operational Permissive Mode: Not Supported 00:30:43.930 NVM Sets: Not Supported 00:30:43.930 Read Recovery Levels: Not Supported 00:30:43.930 Endurance Groups: Not Supported 00:30:43.930 Predictable Latency Mode: Not Supported 00:30:43.930 Traffic Based Keep ALive: Not Supported 00:30:43.930 Namespace Granularity: Not Supported 00:30:43.930 SQ Associations: Not Supported 00:30:43.931 UUID List: Not Supported 00:30:43.931 Multi-Domain Subsystem: Not Supported 00:30:43.931 Fixed Capacity Management: Not Supported 00:30:43.931 Variable Capacity Management: Not Supported 00:30:43.931 Delete Endurance Group: Not Supported 00:30:43.931 Delete NVM Set: Not Supported 00:30:43.931 Extended LBA Formats Supported: Not Supported 00:30:43.931 Flexible Data Placement Supported: Not Supported 00:30:43.931 00:30:43.931 Controller Memory Buffer Support 00:30:43.931 ================================ 00:30:43.931 Supported: No 00:30:43.931 00:30:43.931 Persistent Memory Region Support 00:30:43.931 ================================ 00:30:43.931 Supported: No 00:30:43.931 00:30:43.931 Admin Command Set Attributes 00:30:43.931 ============================ 00:30:43.931 Security Send/Receive: Not Supported 00:30:43.931 Format NVM: Not Supported 00:30:43.931 Firmware Activate/Download: Not Supported 00:30:43.931 Namespace Management: Not Supported 00:30:43.931 Device Self-Test: Not Supported 00:30:43.931 Directives: Not Supported 00:30:43.931 NVMe-MI: Not Supported 00:30:43.931 Virtualization Management: Not Supported 00:30:43.931 Doorbell Buffer Config: Not Supported 00:30:43.931 Get LBA Status Capability: Not Supported 00:30:43.931 Command & Feature Lockdown Capability: Not Supported 00:30:43.931 Abort Command Limit: 1 00:30:43.931 Async 
Event Request Limit: 4 00:30:43.931 Number of Firmware Slots: N/A 00:30:43.931 Firmware Slot 1 Read-Only: N/A 00:30:43.931 Firmware Activation Without Reset: N/A 00:30:43.931 Multiple Update Detection Support: N/A 00:30:43.931 Firmware Update Granularity: No Information Provided 00:30:43.931 Per-Namespace SMART Log: No 00:30:43.931 Asymmetric Namespace Access Log Page: Not Supported 00:30:43.931 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:43.931 Command Effects Log Page: Not Supported 00:30:43.931 Get Log Page Extended Data: Supported 00:30:43.931 Telemetry Log Pages: Not Supported 00:30:43.931 Persistent Event Log Pages: Not Supported 00:30:43.931 Supported Log Pages Log Page: May Support 00:30:43.931 Commands Supported & Effects Log Page: Not Supported 00:30:43.931 Feature Identifiers & Effects Log Page:May Support 00:30:43.931 NVMe-MI Commands & Effects Log Page: May Support 00:30:43.931 Data Area 4 for Telemetry Log: Not Supported 00:30:43.931 Error Log Page Entries Supported: 128 00:30:43.931 Keep Alive: Not Supported 00:30:43.931 00:30:43.931 NVM Command Set Attributes 00:30:43.931 ========================== 00:30:43.931 Submission Queue Entry Size 00:30:43.931 Max: 1 00:30:43.931 Min: 1 00:30:43.931 Completion Queue Entry Size 00:30:43.931 Max: 1 00:30:43.931 Min: 1 00:30:43.931 Number of Namespaces: 0 00:30:43.931 Compare Command: Not Supported 00:30:43.931 Write Uncorrectable Command: Not Supported 00:30:43.931 Dataset Management Command: Not Supported 00:30:43.931 Write Zeroes Command: Not Supported 00:30:43.931 Set Features Save Field: Not Supported 00:30:43.931 Reservations: Not Supported 00:30:43.931 Timestamp: Not Supported 00:30:43.931 Copy: Not Supported 00:30:43.931 Volatile Write Cache: Not Present 00:30:43.931 Atomic Write Unit (Normal): 1 00:30:43.931 Atomic Write Unit (PFail): 1 00:30:43.931 Atomic Compare & Write Unit: 1 00:30:43.931 Fused Compare & Write: Supported 00:30:43.931 Scatter-Gather List 00:30:43.931 SGL Command Set: Supported 00:30:43.931 SGL Keyed: Supported 00:30:43.931 SGL Bit Bucket Descriptor: Not Supported 00:30:43.931 SGL Metadata Pointer: Not Supported 00:30:43.931 Oversized SGL: Not Supported 00:30:43.931 SGL Metadata Address: Not Supported 00:30:43.931 SGL Offset: Supported 00:30:43.931 Transport SGL Data Block: Not Supported 00:30:43.931 Replay Protected Memory Block: Not Supported 00:30:43.931 00:30:43.931 Firmware Slot Information 00:30:43.931 ========================= 00:30:43.931 Active slot: 0 00:30:43.931 00:30:43.931 00:30:43.931 Error Log 00:30:43.931 ========= 00:30:43.931 00:30:43.931 Active Namespaces 00:30:43.931 ================= 00:30:43.931 Discovery Log Page 00:30:43.931 ================== 00:30:43.931 Generation Counter: 2 00:30:43.931 Number of Records: 2 00:30:43.931 Record Format: 0 00:30:43.931 00:30:43.931 Discovery Log Entry 0 00:30:43.931 ---------------------- 00:30:43.931 Transport Type: 3 (TCP) 00:30:43.931 Address Family: 1 (IPv4) 00:30:43.931 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:43.931 Entry Flags: 00:30:43.931 Duplicate Returned Information: 1 00:30:43.931 Explicit Persistent Connection Support for Discovery: 1 00:30:43.931 Transport Requirements: 00:30:43.931 Secure Channel: Not Required 00:30:43.931 Port ID: 0 (0x0000) 00:30:43.931 Controller ID: 65535 (0xffff) 00:30:43.931 Admin Max SQ Size: 128 00:30:43.931 Transport Service Identifier: 4420 00:30:43.931 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:43.931 Transport Address: 10.0.0.2 00:30:43.931 
Discovery Log Entry 1 00:30:43.931 ---------------------- 00:30:43.931 Transport Type: 3 (TCP) 00:30:43.931 Address Family: 1 (IPv4) 00:30:43.931 Subsystem Type: 2 (NVM Subsystem) 00:30:43.931 Entry Flags: 00:30:43.931 Duplicate Returned Information: 0 00:30:43.931 Explicit Persistent Connection Support for Discovery: 0 00:30:43.931 Transport Requirements: 00:30:43.931 Secure Channel: Not Required 00:30:43.931 Port ID: 0 (0x0000) 00:30:43.931 Controller ID: 65535 (0xffff) 00:30:43.931 Admin Max SQ Size: 128 00:30:43.931 Transport Service Identifier: 4420 00:30:43.931 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:43.931 Transport Address: 10.0.0.2 [2024-07-13 13:42:18.590157] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:30:43.931 [2024-07-13 13:42:18.590190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:43.931 [2024-07-13 13:42:18.590212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.931 [2024-07-13 13:42:18.590227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:43.931 [2024-07-13 13:42:18.590240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.931 [2024-07-13 13:42:18.590252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:43.931 [2024-07-13 13:42:18.590265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.931 [2024-07-13 13:42:18.590277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.931 [2024-07-13 13:42:18.590289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.931 [2024-07-13 13:42:18.590315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.931 [2024-07-13 13:42:18.590331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.931 [2024-07-13 13:42:18.590347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.932 [2024-07-13 13:42:18.590368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.932 [2024-07-13 13:42:18.590405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.932 [2024-07-13 13:42:18.590586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.932 [2024-07-13 13:42:18.590606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.932 [2024-07-13 13:42:18.590618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.590630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.932 [2024-07-13 13:42:18.590651] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.590665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.590682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x615000015700) 00:30:43.932 [2024-07-13 13:42:18.590705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.932 [2024-07-13 13:42:18.590744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.932 [2024-07-13 13:42:18.590993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.932 [2024-07-13 13:42:18.591016] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.932 [2024-07-13 13:42:18.591028] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.591039] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.932 [2024-07-13 13:42:18.591053] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:30:43.932 [2024-07-13 13:42:18.591067] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:30:43.932 [2024-07-13 13:42:18.591093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.591109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.591121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.932 [2024-07-13 13:42:18.591141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.932 [2024-07-13 13:42:18.591200] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.932 [2024-07-13 13:42:18.591380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.932 [2024-07-13 13:42:18.591400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.932 [2024-07-13 13:42:18.591411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.591422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.932 [2024-07-13 13:42:18.591449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.591463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.591474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.932 [2024-07-13 13:42:18.591491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.932 [2024-07-13 13:42:18.591520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.932 [2024-07-13 13:42:18.591665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.932 [2024-07-13 13:42:18.591686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.932 [2024-07-13 13:42:18.591697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.591707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.932 [2024-07-13 13:42:18.591733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.591747] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.591758] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.932 [2024-07-13 13:42:18.591775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.932 [2024-07-13 13:42:18.591803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.932 [2024-07-13 13:42:18.591982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.932 [2024-07-13 13:42:18.592004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.932 [2024-07-13 13:42:18.592015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.592026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.932 [2024-07-13 13:42:18.592052] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.592068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.592078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.932 [2024-07-13 13:42:18.592096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.932 [2024-07-13 13:42:18.592126] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.932 [2024-07-13 13:42:18.592306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.932 [2024-07-13 13:42:18.592325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.932 [2024-07-13 13:42:18.592336] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.592346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.932 [2024-07-13 13:42:18.592371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.592386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.592396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.932 [2024-07-13 13:42:18.592413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.932 [2024-07-13 13:42:18.592445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.932 [2024-07-13 13:42:18.592674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.932 [2024-07-13 13:42:18.592701] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.932 [2024-07-13 13:42:18.592713] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.592723] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.932 [2024-07-13 13:42:18.592749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.592764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.592774] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.932 [2024-07-13 13:42:18.592791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.932 [2024-07-13 13:42:18.592819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.932 [2024-07-13 13:42:18.592990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.932 [2024-07-13 13:42:18.593013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.932 [2024-07-13 13:42:18.593024] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.593035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.932 [2024-07-13 13:42:18.593061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.593077] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.593088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.932 [2024-07-13 13:42:18.593105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.932 [2024-07-13 13:42:18.593135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.932 [2024-07-13 13:42:18.593317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.932 [2024-07-13 13:42:18.593338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.932 [2024-07-13 13:42:18.593348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.593359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.932 [2024-07-13 13:42:18.593384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.593399] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.932 [2024-07-13 13:42:18.593409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.932 [2024-07-13 13:42:18.593431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.933 [2024-07-13 13:42:18.593461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.933 [2024-07-13 13:42:18.593646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.933 [2024-07-13 13:42:18.593666] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.933 [2024-07-13 13:42:18.593677] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.593687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.933 [2024-07-13 13:42:18.593713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.593728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.593738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.933 [2024-07-13 13:42:18.593755] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.933 [2024-07-13 13:42:18.593788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.933 [2024-07-13 13:42:18.593969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.933 [2024-07-13 13:42:18.593990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.933 [2024-07-13 13:42:18.594002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.594013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.933 [2024-07-13 13:42:18.594039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.594054] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.594064] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.933 [2024-07-13 13:42:18.594082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.933 [2024-07-13 13:42:18.594126] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.933 [2024-07-13 13:42:18.594289] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.933 [2024-07-13 13:42:18.594310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.933 [2024-07-13 13:42:18.594321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.594332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.933 [2024-07-13 13:42:18.594357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.594371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.594382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.933 [2024-07-13 13:42:18.594400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.933 [2024-07-13 13:42:18.594428] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.933 [2024-07-13 13:42:18.594606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.933 [2024-07-13 13:42:18.594625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.933 [2024-07-13 13:42:18.594636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.594646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.933 [2024-07-13 13:42:18.594671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.594686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.594696] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.933 [2024-07-13 13:42:18.594714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.933 [2024-07-13 
13:42:18.594742] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.933 [2024-07-13 13:42:18.598885] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.933 [2024-07-13 13:42:18.598910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.933 [2024-07-13 13:42:18.598922] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.598933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.933 [2024-07-13 13:42:18.598961] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.598977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.598988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:43.933 [2024-07-13 13:42:18.599006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.933 [2024-07-13 13:42:18.599042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:43.933 [2024-07-13 13:42:18.599204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.933 [2024-07-13 13:42:18.599230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.933 [2024-07-13 13:42:18.599242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.933 [2024-07-13 13:42:18.599253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:43.933 [2024-07-13 13:42:18.599275] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds 00:30:43.933 00:30:43.933 13:42:18 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:44.193 [2024-07-13 13:42:18.701002] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:30:44.193 [2024-07-13 13:42:18.701099] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391380 ] 00:30:44.193 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.193 [2024-07-13 13:42:18.761151] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:30:44.193 [2024-07-13 13:42:18.761284] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:44.193 [2024-07-13 13:42:18.761309] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:44.193 [2024-07-13 13:42:18.761342] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:44.193 [2024-07-13 13:42:18.761364] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:44.193 [2024-07-13 13:42:18.761718] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:30:44.193 [2024-07-13 13:42:18.761787] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:44.193 [2024-07-13 13:42:18.772548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:44.193 [2024-07-13 13:42:18.772592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:44.193 [2024-07-13 13:42:18.772607] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:44.193 [2024-07-13 13:42:18.772617] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:44.193 [2024-07-13 13:42:18.772687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.193 [2024-07-13 13:42:18.772707] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.193 [2024-07-13 13:42:18.772725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.193 [2024-07-13 13:42:18.772754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:44.193 [2024-07-13 13:42:18.772791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.193 [2024-07-13 13:42:18.779895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.193 [2024-07-13 13:42:18.779922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.193 [2024-07-13 13:42:18.779936] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.193 [2024-07-13 13:42:18.779949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.193 [2024-07-13 13:42:18.779974] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:44.193 [2024-07-13 13:42:18.780002] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:30:44.193 [2024-07-13 13:42:18.780024] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:30:44.193 [2024-07-13 13:42:18.780054] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.193 [2024-07-13 13:42:18.780069] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.193 [2024-07-13 13:42:18.780085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.193 [2024-07-13 13:42:18.780106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.193 [2024-07-13 13:42:18.780145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.193 [2024-07-13 13:42:18.780349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.193 [2024-07-13 13:42:18.780371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.193 [2024-07-13 13:42:18.780383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.193 [2024-07-13 13:42:18.780395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.193 [2024-07-13 13:42:18.780416] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:30:44.193 [2024-07-13 13:42:18.780437] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:30:44.193 [2024-07-13 13:42:18.780477] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.193 [2024-07-13 13:42:18.780492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.193 [2024-07-13 13:42:18.780517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.193 [2024-07-13 13:42:18.780540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.193 [2024-07-13 13:42:18.780572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.193 [2024-07-13 13:42:18.780840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.193 [2024-07-13 13:42:18.780861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.193 [2024-07-13 13:42:18.780884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.193 [2024-07-13 13:42:18.780895] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.193 [2024-07-13 13:42:18.780914] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:30:44.193 [2024-07-13 13:42:18.780939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:30:44.193 [2024-07-13 13:42:18.780978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.193 [2024-07-13 13:42:18.780993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.193 [2024-07-13 13:42:18.781004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.193 [2024-07-13 13:42:18.781022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.193 [2024-07-13 13:42:18.781053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.193 [2024-07-13 13:42:18.781237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:30:44.193 [2024-07-13 13:42:18.781266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.193 [2024-07-13 13:42:18.781278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.781289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.194 [2024-07-13 13:42:18.781304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:44.194 [2024-07-13 13:42:18.781336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.781352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.781379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.194 [2024-07-13 13:42:18.781397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.194 [2024-07-13 13:42:18.781448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.194 [2024-07-13 13:42:18.781669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.194 [2024-07-13 13:42:18.781690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.194 [2024-07-13 13:42:18.781701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.781711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.194 [2024-07-13 13:42:18.781725] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:30:44.194 [2024-07-13 13:42:18.781739] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:30:44.194 [2024-07-13 13:42:18.781760] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:44.194 [2024-07-13 13:42:18.781891] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:30:44.194 [2024-07-13 13:42:18.781906] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:44.194 [2024-07-13 13:42:18.781933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.781948] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.781963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.194 [2024-07-13 13:42:18.781983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.194 [2024-07-13 13:42:18.782016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.194 [2024-07-13 13:42:18.782176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.194 [2024-07-13 13:42:18.782196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.194 [2024-07-13 13:42:18.782207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.782218] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.194 [2024-07-13 13:42:18.782232] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:44.194 [2024-07-13 13:42:18.782266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.782282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.782293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.194 [2024-07-13 13:42:18.782328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.194 [2024-07-13 13:42:18.782363] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.194 [2024-07-13 13:42:18.782549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.194 [2024-07-13 13:42:18.782569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.194 [2024-07-13 13:42:18.782580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.782591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.194 [2024-07-13 13:42:18.782609] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:44.194 [2024-07-13 13:42:18.782635] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:30:44.194 [2024-07-13 13:42:18.782658] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:30:44.194 [2024-07-13 13:42:18.782694] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:30:44.194 [2024-07-13 13:42:18.782723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.782752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.194 [2024-07-13 13:42:18.782775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.194 [2024-07-13 13:42:18.782822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.194 [2024-07-13 13:42:18.783105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.194 [2024-07-13 13:42:18.783126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.194 [2024-07-13 13:42:18.783142] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.783155] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:44.194 [2024-07-13 13:42:18.783184] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.194 [2024-07-13 13:42:18.783196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.783225] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.783240] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.824096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.194 [2024-07-13 13:42:18.824126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.194 [2024-07-13 13:42:18.824139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.824150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.194 [2024-07-13 13:42:18.824182] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:30:44.194 [2024-07-13 13:42:18.824199] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:30:44.194 [2024-07-13 13:42:18.824216] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:30:44.194 [2024-07-13 13:42:18.824232] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:30:44.194 [2024-07-13 13:42:18.824248] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:30:44.194 [2024-07-13 13:42:18.824263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:30:44.194 [2024-07-13 13:42:18.824286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:30:44.194 [2024-07-13 13:42:18.824311] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.824327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.824339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.194 [2024-07-13 13:42:18.824382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:44.194 [2024-07-13 13:42:18.824421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.194 [2024-07-13 13:42:18.824607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.194 [2024-07-13 13:42:18.824628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.194 [2024-07-13 13:42:18.824640] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.824651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.194 [2024-07-13 13:42:18.824670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.824684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.824695] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:44.194 [2024-07-13 13:42:18.824714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.194 [2024-07-13 13:42:18.824738] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.824766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.824775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:44.194 [2024-07-13 13:42:18.824791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.194 [2024-07-13 13:42:18.824806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.824816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.824826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:44.194 [2024-07-13 13:42:18.824841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.194 [2024-07-13 13:42:18.824880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.824893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.824903] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.194 [2024-07-13 13:42:18.824935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.194 [2024-07-13 13:42:18.824950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:44.194 [2024-07-13 13:42:18.824977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:44.194 [2024-07-13 13:42:18.824998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.825011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.194 [2024-07-13 13:42:18.825030] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.194 [2024-07-13 13:42:18.825069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:44.194 [2024-07-13 13:42:18.825087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:44.194 [2024-07-13 13:42:18.825099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:44.194 [2024-07-13 13:42:18.825111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.194 [2024-07-13 13:42:18.825122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.194 [2024-07-13 13:42:18.825321] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.194 [2024-07-13 13:42:18.825341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.194 [2024-07-13 13:42:18.825353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.194 [2024-07-13 13:42:18.825379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.194 [2024-07-13 13:42:18.825397] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:30:44.194 [2024-07-13 13:42:18.825412] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:44.194 [2024-07-13 13:42:18.825433] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:30:44.195 [2024-07-13 13:42:18.825454] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:44.195 [2024-07-13 13:42:18.825472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.825485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.825495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.195 [2024-07-13 13:42:18.825513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:44.195 [2024-07-13 13:42:18.825542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.195 [2024-07-13 13:42:18.825810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.195 [2024-07-13 13:42:18.825830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.195 [2024-07-13 13:42:18.825842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.825857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.195 [2024-07-13 13:42:18.825975] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:30:44.195 [2024-07-13 13:42:18.826012] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:44.195 [2024-07-13 13:42:18.826044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.826058] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.195 [2024-07-13 13:42:18.826077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.195 [2024-07-13 13:42:18.826108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.195 [2024-07-13 13:42:18.826330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.195 [2024-07-13 13:42:18.826350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.195 [2024-07-13 13:42:18.826361] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.826372] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:44.195 [2024-07-13 13:42:18.826383] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.195 [2024-07-13 13:42:18.826395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.826442] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.826458] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.870905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.195 [2024-07-13 13:42:18.870932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.195 [2024-07-13 13:42:18.870943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.870954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.195 [2024-07-13 13:42:18.870997] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:30:44.195 [2024-07-13 13:42:18.871032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:30:44.195 [2024-07-13 13:42:18.871069] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:30:44.195 [2024-07-13 13:42:18.871097] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.871111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.195 [2024-07-13 13:42:18.871131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.195 [2024-07-13 13:42:18.871170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.195 [2024-07-13 13:42:18.871403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.195 [2024-07-13 13:42:18.871424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.195 [2024-07-13 13:42:18.871435] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.871446] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:44.195 [2024-07-13 13:42:18.871457] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.195 [2024-07-13 13:42:18.871469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.871511] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.871526] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.914898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.195 [2024-07-13 13:42:18.914925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.195 [2024-07-13 13:42:18.914937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.914948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.195 [2024-07-13 13:42:18.914988] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:44.195 [2024-07-13 13:42:18.915018] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:44.195 [2024-07-13 13:42:18.915049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.915064] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.195 [2024-07-13 13:42:18.915084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.195 [2024-07-13 13:42:18.915133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.195 [2024-07-13 13:42:18.915339] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.195 [2024-07-13 13:42:18.915361] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.195 [2024-07-13 13:42:18.915373] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.915383] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:44.195 [2024-07-13 13:42:18.915395] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.195 [2024-07-13 13:42:18.915406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.915433] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.195 [2024-07-13 13:42:18.915460] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.957069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.456 [2024-07-13 13:42:18.957100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.456 [2024-07-13 13:42:18.957118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.957131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.456 [2024-07-13 13:42:18.957159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:44.456 [2024-07-13 13:42:18.957184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:30:44.456 [2024-07-13 13:42:18.957210] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:30:44.456 [2024-07-13 13:42:18.957227] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:44.456 [2024-07-13 13:42:18.957241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:44.456 [2024-07-13 13:42:18.957254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:30:44.456 [2024-07-13 13:42:18.957273] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:30:44.456 [2024-07-13 13:42:18.957286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 
00:30:44.456 [2024-07-13 13:42:18.957299] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:30:44.456 [2024-07-13 13:42:18.957365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.957383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.456 [2024-07-13 13:42:18.957427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.456 [2024-07-13 13:42:18.957453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.957466] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.957477] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.456 [2024-07-13 13:42:18.957493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:44.456 [2024-07-13 13:42:18.957525] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.456 [2024-07-13 13:42:18.957564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.456 [2024-07-13 13:42:18.957759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.456 [2024-07-13 13:42:18.957780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.456 [2024-07-13 13:42:18.957792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.957804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.456 [2024-07-13 13:42:18.957827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.456 [2024-07-13 13:42:18.957859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.456 [2024-07-13 13:42:18.957876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.957887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.456 [2024-07-13 13:42:18.957928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.957944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.456 [2024-07-13 13:42:18.957962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.456 [2024-07-13 13:42:18.957997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.456 [2024-07-13 13:42:18.958158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.456 [2024-07-13 13:42:18.958180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.456 [2024-07-13 13:42:18.958191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.958202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.456 [2024-07-13 13:42:18.958227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.958242] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.456 [2024-07-13 13:42:18.958260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.456 [2024-07-13 13:42:18.958305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.456 [2024-07-13 13:42:18.958521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.456 [2024-07-13 13:42:18.958541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.456 [2024-07-13 13:42:18.958552] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.958563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.456 [2024-07-13 13:42:18.958587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.958617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.456 [2024-07-13 13:42:18.958635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.456 [2024-07-13 13:42:18.958678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.456 [2024-07-13 13:42:18.958907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.456 [2024-07-13 13:42:18.958928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.456 [2024-07-13 13:42:18.958939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.958950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.456 [2024-07-13 13:42:18.958993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.959011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:44.456 [2024-07-13 13:42:18.959044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.456 [2024-07-13 13:42:18.959066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.959080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:44.456 [2024-07-13 13:42:18.959097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.456 [2024-07-13 13:42:18.959117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.959136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:44.456 [2024-07-13 13:42:18.959153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.456 [2024-07-13 13:42:18.959192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.959205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:44.456 [2024-07-13 13:42:18.959226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.456 [2024-07-13 13:42:18.959261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:44.456 [2024-07-13 13:42:18.959295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:44.456 [2024-07-13 13:42:18.959307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:44.456 [2024-07-13 13:42:18.959318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:44.456 [2024-07-13 13:42:18.959709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.456 [2024-07-13 13:42:18.959731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.456 [2024-07-13 13:42:18.959742] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.959753] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:44.456 [2024-07-13 13:42:18.959780] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:44.456 [2024-07-13 13:42:18.959792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.959830] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.959847] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.959875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.456 [2024-07-13 13:42:18.959894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.456 [2024-07-13 13:42:18.959905] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.959915] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:44.456 [2024-07-13 13:42:18.959926] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:44.456 [2024-07-13 13:42:18.959937] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.959953] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.959964] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.959985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.456 [2024-07-13 13:42:18.960000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.456 [2024-07-13 13:42:18.960011] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.960021] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:44.456 [2024-07-13 13:42:18.960032] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:44.456 [2024-07-13 13:42:18.960043] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:30:44.456 [2024-07-13 13:42:18.960057] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.960069] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.960081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:44.456 [2024-07-13 13:42:18.960095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:44.456 [2024-07-13 13:42:18.960121] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.960131] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:44.456 [2024-07-13 13:42:18.960142] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:44.456 [2024-07-13 13:42:18.960152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.960167] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.960194] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:44.456 [2024-07-13 13:42:18.960210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.456 [2024-07-13 13:42:18.960229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.456 [2024-07-13 13:42:18.960239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.457 [2024-07-13 13:42:18.960250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:44.457 [2024-07-13 13:42:18.960283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.457 [2024-07-13 13:42:18.960300] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.457 [2024-07-13 13:42:18.960310] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.457 [2024-07-13 13:42:18.960319] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:44.457 [2024-07-13 13:42:18.960341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.457 [2024-07-13 13:42:18.960357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.457 [2024-07-13 13:42:18.960367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.457 [2024-07-13 13:42:18.960376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:44.457 [2024-07-13 13:42:18.960399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.457 [2024-07-13 13:42:18.960415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.457 [2024-07-13 13:42:18.960425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.457 [2024-07-13 13:42:18.960435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:44.457 ===================================================== 00:30:44.457 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:44.457 ===================================================== 00:30:44.457 Controller Capabilities/Features 00:30:44.457 ================================ 00:30:44.457 Vendor ID: 8086 00:30:44.457 Subsystem Vendor ID: 8086 00:30:44.457 Serial Number: SPDK00000000000001 00:30:44.457 Model Number: SPDK bdev Controller 
00:30:44.457 Firmware Version: 24.09 00:30:44.457 Recommended Arb Burst: 6 00:30:44.457 IEEE OUI Identifier: e4 d2 5c 00:30:44.457 Multi-path I/O 00:30:44.457 May have multiple subsystem ports: Yes 00:30:44.457 May have multiple controllers: Yes 00:30:44.457 Associated with SR-IOV VF: No 00:30:44.457 Max Data Transfer Size: 131072 00:30:44.457 Max Number of Namespaces: 32 00:30:44.457 Max Number of I/O Queues: 127 00:30:44.457 NVMe Specification Version (VS): 1.3 00:30:44.457 NVMe Specification Version (Identify): 1.3 00:30:44.457 Maximum Queue Entries: 128 00:30:44.457 Contiguous Queues Required: Yes 00:30:44.457 Arbitration Mechanisms Supported 00:30:44.457 Weighted Round Robin: Not Supported 00:30:44.457 Vendor Specific: Not Supported 00:30:44.457 Reset Timeout: 15000 ms 00:30:44.457 Doorbell Stride: 4 bytes 00:30:44.457 NVM Subsystem Reset: Not Supported 00:30:44.457 Command Sets Supported 00:30:44.457 NVM Command Set: Supported 00:30:44.457 Boot Partition: Not Supported 00:30:44.457 Memory Page Size Minimum: 4096 bytes 00:30:44.457 Memory Page Size Maximum: 4096 bytes 00:30:44.457 Persistent Memory Region: Not Supported 00:30:44.457 Optional Asynchronous Events Supported 00:30:44.457 Namespace Attribute Notices: Supported 00:30:44.457 Firmware Activation Notices: Not Supported 00:30:44.457 ANA Change Notices: Not Supported 00:30:44.457 PLE Aggregate Log Change Notices: Not Supported 00:30:44.457 LBA Status Info Alert Notices: Not Supported 00:30:44.457 EGE Aggregate Log Change Notices: Not Supported 00:30:44.457 Normal NVM Subsystem Shutdown event: Not Supported 00:30:44.457 Zone Descriptor Change Notices: Not Supported 00:30:44.457 Discovery Log Change Notices: Not Supported 00:30:44.457 Controller Attributes 00:30:44.457 128-bit Host Identifier: Supported 00:30:44.457 Non-Operational Permissive Mode: Not Supported 00:30:44.457 NVM Sets: Not Supported 00:30:44.457 Read Recovery Levels: Not Supported 00:30:44.457 Endurance Groups: Not Supported 00:30:44.457 Predictable Latency Mode: Not Supported 00:30:44.457 Traffic Based Keep ALive: Not Supported 00:30:44.457 Namespace Granularity: Not Supported 00:30:44.457 SQ Associations: Not Supported 00:30:44.457 UUID List: Not Supported 00:30:44.457 Multi-Domain Subsystem: Not Supported 00:30:44.457 Fixed Capacity Management: Not Supported 00:30:44.457 Variable Capacity Management: Not Supported 00:30:44.457 Delete Endurance Group: Not Supported 00:30:44.457 Delete NVM Set: Not Supported 00:30:44.457 Extended LBA Formats Supported: Not Supported 00:30:44.457 Flexible Data Placement Supported: Not Supported 00:30:44.457 00:30:44.457 Controller Memory Buffer Support 00:30:44.457 ================================ 00:30:44.457 Supported: No 00:30:44.457 00:30:44.457 Persistent Memory Region Support 00:30:44.457 ================================ 00:30:44.457 Supported: No 00:30:44.457 00:30:44.457 Admin Command Set Attributes 00:30:44.457 ============================ 00:30:44.457 Security Send/Receive: Not Supported 00:30:44.457 Format NVM: Not Supported 00:30:44.457 Firmware Activate/Download: Not Supported 00:30:44.457 Namespace Management: Not Supported 00:30:44.457 Device Self-Test: Not Supported 00:30:44.457 Directives: Not Supported 00:30:44.457 NVMe-MI: Not Supported 00:30:44.457 Virtualization Management: Not Supported 00:30:44.457 Doorbell Buffer Config: Not Supported 00:30:44.457 Get LBA Status Capability: Not Supported 00:30:44.457 Command & Feature Lockdown Capability: Not Supported 00:30:44.457 Abort Command Limit: 4 00:30:44.457 Async 
Event Request Limit: 4 00:30:44.457 Number of Firmware Slots: N/A 00:30:44.457 Firmware Slot 1 Read-Only: N/A 00:30:44.457 Firmware Activation Without Reset: N/A 00:30:44.457 Multiple Update Detection Support: N/A 00:30:44.457 Firmware Update Granularity: No Information Provided 00:30:44.457 Per-Namespace SMART Log: No 00:30:44.457 Asymmetric Namespace Access Log Page: Not Supported 00:30:44.457 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:44.457 Command Effects Log Page: Supported 00:30:44.457 Get Log Page Extended Data: Supported 00:30:44.457 Telemetry Log Pages: Not Supported 00:30:44.457 Persistent Event Log Pages: Not Supported 00:30:44.457 Supported Log Pages Log Page: May Support 00:30:44.457 Commands Supported & Effects Log Page: Not Supported 00:30:44.457 Feature Identifiers & Effects Log Page:May Support 00:30:44.457 NVMe-MI Commands & Effects Log Page: May Support 00:30:44.457 Data Area 4 for Telemetry Log: Not Supported 00:30:44.457 Error Log Page Entries Supported: 128 00:30:44.457 Keep Alive: Supported 00:30:44.457 Keep Alive Granularity: 10000 ms 00:30:44.457 00:30:44.457 NVM Command Set Attributes 00:30:44.457 ========================== 00:30:44.457 Submission Queue Entry Size 00:30:44.457 Max: 64 00:30:44.457 Min: 64 00:30:44.457 Completion Queue Entry Size 00:30:44.457 Max: 16 00:30:44.457 Min: 16 00:30:44.457 Number of Namespaces: 32 00:30:44.457 Compare Command: Supported 00:30:44.457 Write Uncorrectable Command: Not Supported 00:30:44.457 Dataset Management Command: Supported 00:30:44.457 Write Zeroes Command: Supported 00:30:44.457 Set Features Save Field: Not Supported 00:30:44.457 Reservations: Supported 00:30:44.457 Timestamp: Not Supported 00:30:44.457 Copy: Supported 00:30:44.457 Volatile Write Cache: Present 00:30:44.457 Atomic Write Unit (Normal): 1 00:30:44.457 Atomic Write Unit (PFail): 1 00:30:44.457 Atomic Compare & Write Unit: 1 00:30:44.457 Fused Compare & Write: Supported 00:30:44.457 Scatter-Gather List 00:30:44.457 SGL Command Set: Supported 00:30:44.457 SGL Keyed: Supported 00:30:44.457 SGL Bit Bucket Descriptor: Not Supported 00:30:44.457 SGL Metadata Pointer: Not Supported 00:30:44.457 Oversized SGL: Not Supported 00:30:44.457 SGL Metadata Address: Not Supported 00:30:44.457 SGL Offset: Supported 00:30:44.457 Transport SGL Data Block: Not Supported 00:30:44.457 Replay Protected Memory Block: Not Supported 00:30:44.457 00:30:44.457 Firmware Slot Information 00:30:44.457 ========================= 00:30:44.457 Active slot: 1 00:30:44.457 Slot 1 Firmware Revision: 24.09 00:30:44.457 00:30:44.457 00:30:44.457 Commands Supported and Effects 00:30:44.457 ============================== 00:30:44.457 Admin Commands 00:30:44.457 -------------- 00:30:44.457 Get Log Page (02h): Supported 00:30:44.457 Identify (06h): Supported 00:30:44.457 Abort (08h): Supported 00:30:44.457 Set Features (09h): Supported 00:30:44.457 Get Features (0Ah): Supported 00:30:44.457 Asynchronous Event Request (0Ch): Supported 00:30:44.457 Keep Alive (18h): Supported 00:30:44.457 I/O Commands 00:30:44.457 ------------ 00:30:44.457 Flush (00h): Supported LBA-Change 00:30:44.457 Write (01h): Supported LBA-Change 00:30:44.457 Read (02h): Supported 00:30:44.457 Compare (05h): Supported 00:30:44.457 Write Zeroes (08h): Supported LBA-Change 00:30:44.457 Dataset Management (09h): Supported LBA-Change 00:30:44.457 Copy (19h): Supported LBA-Change 00:30:44.457 00:30:44.457 Error Log 00:30:44.457 ========= 00:30:44.457 00:30:44.457 Arbitration 00:30:44.457 =========== 00:30:44.457 Arbitration 
Burst: 1 00:30:44.457 00:30:44.457 Power Management 00:30:44.457 ================ 00:30:44.457 Number of Power States: 1 00:30:44.457 Current Power State: Power State #0 00:30:44.457 Power State #0: 00:30:44.457 Max Power: 0.00 W 00:30:44.457 Non-Operational State: Operational 00:30:44.457 Entry Latency: Not Reported 00:30:44.457 Exit Latency: Not Reported 00:30:44.457 Relative Read Throughput: 0 00:30:44.457 Relative Read Latency: 0 00:30:44.457 Relative Write Throughput: 0 00:30:44.457 Relative Write Latency: 0 00:30:44.457 Idle Power: Not Reported 00:30:44.457 Active Power: Not Reported 00:30:44.458 Non-Operational Permissive Mode: Not Supported 00:30:44.458 00:30:44.458 Health Information 00:30:44.458 ================== 00:30:44.458 Critical Warnings: 00:30:44.458 Available Spare Space: OK 00:30:44.458 Temperature: OK 00:30:44.458 Device Reliability: OK 00:30:44.458 Read Only: No 00:30:44.458 Volatile Memory Backup: OK 00:30:44.458 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:44.458 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:44.458 Available Spare: 0% 00:30:44.458 Available Spare Threshold: 0% 00:30:44.458 Life Percentage Used:[2024-07-13 13:42:18.960624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.960642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:44.458 [2024-07-13 13:42:18.960661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.458 [2024-07-13 13:42:18.960692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:44.458 [2024-07-13 13:42:18.964904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.458 [2024-07-13 13:42:18.964928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.458 [2024-07-13 13:42:18.964941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.964958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.965038] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:30:44.458 [2024-07-13 13:42:18.965070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.965091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.458 [2024-07-13 13:42:18.965105] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.965119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.458 [2024-07-13 13:42:18.965131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.965143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.458 [2024-07-13 13:42:18.965155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.965168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.458 [2024-07-13 13:42:18.965203] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.965217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.965227] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.458 [2024-07-13 13:42:18.965250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.458 [2024-07-13 13:42:18.965285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.458 [2024-07-13 13:42:18.965477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.458 [2024-07-13 13:42:18.965505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.458 [2024-07-13 13:42:18.965518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.965529] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.965549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.965564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.965575] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.458 [2024-07-13 13:42:18.965614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.458 [2024-07-13 13:42:18.965669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.458 [2024-07-13 13:42:18.965931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.458 [2024-07-13 13:42:18.965952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.458 [2024-07-13 13:42:18.965964] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.965974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.965988] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:44.458 [2024-07-13 13:42:18.966007] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:30:44.458 [2024-07-13 13:42:18.966047] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.966064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.966074] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.458 [2024-07-13 13:42:18.966093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.458 [2024-07-13 13:42:18.966122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.458 [2024-07-13 13:42:18.966310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.458 [2024-07-13 13:42:18.966330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.458 [2024-07-13 
13:42:18.966340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.966351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.966376] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.966391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.966402] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.458 [2024-07-13 13:42:18.966419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.458 [2024-07-13 13:42:18.966464] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.458 [2024-07-13 13:42:18.966668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.458 [2024-07-13 13:42:18.966688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.458 [2024-07-13 13:42:18.966699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.966710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.966740] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.966756] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.966766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.458 [2024-07-13 13:42:18.966798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.458 [2024-07-13 13:42:18.966828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.458 [2024-07-13 13:42:18.967044] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.458 [2024-07-13 13:42:18.967065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.458 [2024-07-13 13:42:18.967077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.967087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.967113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.967128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.967138] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.458 [2024-07-13 13:42:18.967155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.458 [2024-07-13 13:42:18.967185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.458 [2024-07-13 13:42:18.967441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.458 [2024-07-13 13:42:18.967463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.458 [2024-07-13 13:42:18.967474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.967485] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.967511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.967526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.967536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.458 [2024-07-13 13:42:18.967568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.458 [2024-07-13 13:42:18.967598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.458 [2024-07-13 13:42:18.967794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.458 [2024-07-13 13:42:18.967814] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.458 [2024-07-13 13:42:18.967825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.967835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.967860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.967884] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.967895] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.458 [2024-07-13 13:42:18.967912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.458 [2024-07-13 13:42:18.967956] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.458 [2024-07-13 13:42:18.968175] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.458 [2024-07-13 13:42:18.968195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.458 [2024-07-13 13:42:18.968206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.968217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.968246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.968262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.968272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.458 [2024-07-13 13:42:18.968305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.458 [2024-07-13 13:42:18.968334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.458 [2024-07-13 13:42:18.968536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.458 [2024-07-13 13:42:18.968558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.458 [2024-07-13 13:42:18.968569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.968579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.458 [2024-07-13 13:42:18.968605] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.968620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.458 [2024-07-13 13:42:18.968630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.458 [2024-07-13 13:42:18.968670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.459 [2024-07-13 13:42:18.968715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.459 [2024-07-13 13:42:18.972905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.459 [2024-07-13 13:42:18.972929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.459 [2024-07-13 13:42:18.972940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.459 [2024-07-13 13:42:18.972950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.459 [2024-07-13 13:42:18.972976] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:44.459 [2024-07-13 13:42:18.972991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:44.459 [2024-07-13 13:42:18.973001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:44.459 [2024-07-13 13:42:18.973018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.459 [2024-07-13 13:42:18.973048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:44.459 [2024-07-13 13:42:18.973240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:44.459 [2024-07-13 13:42:18.973261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:44.459 [2024-07-13 13:42:18.973273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:44.459 [2024-07-13 13:42:18.973283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:44.459 [2024-07-13 13:42:18.973310] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:30:44.459 0% 00:30:44.459 Data Units Read: 0 00:30:44.459 Data Units Written: 0 00:30:44.459 Host Read Commands: 0 00:30:44.459 Host Write Commands: 0 00:30:44.459 Controller Busy Time: 0 minutes 00:30:44.459 Power Cycles: 0 00:30:44.459 Power On Hours: 0 hours 00:30:44.459 Unsafe Shutdowns: 0 00:30:44.459 Unrecoverable Media Errors: 0 00:30:44.459 Lifetime Error Log Entries: 0 00:30:44.459 Warning Temperature Time: 0 minutes 00:30:44.459 Critical Temperature Time: 0 minutes 00:30:44.459 00:30:44.459 Number of Queues 00:30:44.459 ================ 00:30:44.459 Number of I/O Submission Queues: 127 00:30:44.459 Number of I/O Completion Queues: 127 00:30:44.459 00:30:44.459 Active Namespaces 00:30:44.459 ================= 00:30:44.459 Namespace ID:1 00:30:44.459 Error Recovery Timeout: Unlimited 00:30:44.459 Command Set Identifier: NVM (00h) 00:30:44.459 Deallocate: Supported 00:30:44.459 Deallocated/Unwritten Error: Not Supported 00:30:44.459 Deallocated Read Value: Unknown 00:30:44.459 Deallocate in Write Zeroes: Not Supported 00:30:44.459 Deallocated Guard Field: 0xFFFF 00:30:44.459 Flush: Supported 00:30:44.459 Reservation: Supported 
00:30:44.459 Namespace Sharing Capabilities: Multiple Controllers 00:30:44.459 Size (in LBAs): 131072 (0GiB) 00:30:44.459 Capacity (in LBAs): 131072 (0GiB) 00:30:44.459 Utilization (in LBAs): 131072 (0GiB) 00:30:44.459 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:44.459 EUI64: ABCDEF0123456789 00:30:44.459 UUID: 8ae83cb1-b372-499f-b4d2-db59cf632517 00:30:44.459 Thin Provisioning: Not Supported 00:30:44.459 Per-NS Atomic Units: Yes 00:30:44.459 Atomic Boundary Size (Normal): 0 00:30:44.459 Atomic Boundary Size (PFail): 0 00:30:44.459 Atomic Boundary Offset: 0 00:30:44.459 Maximum Single Source Range Length: 65535 00:30:44.459 Maximum Copy Length: 65535 00:30:44.459 Maximum Source Range Count: 1 00:30:44.459 NGUID/EUI64 Never Reused: No 00:30:44.459 Namespace Write Protected: No 00:30:44.459 Number of LBA Formats: 1 00:30:44.459 Current LBA Format: LBA Format #00 00:30:44.459 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:44.459 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:44.459 rmmod nvme_tcp 00:30:44.459 rmmod nvme_fabrics 00:30:44.459 rmmod nvme_keyring 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 391221 ']' 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 391221 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 391221 ']' 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 391221 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 391221 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 391221' 00:30:44.459 killing process with 
pid 391221 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 391221 00:30:44.459 13:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 391221 00:30:46.359 13:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:46.359 13:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:46.359 13:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:46.359 13:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:46.359 13:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:46.359 13:42:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.359 13:42:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:46.359 13:42:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.257 13:42:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:48.257 00:30:48.257 real 0m7.633s 00:30:48.257 user 0m11.095s 00:30:48.257 sys 0m2.157s 00:30:48.257 13:42:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:48.257 13:42:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:48.257 ************************************ 00:30:48.257 END TEST nvmf_identify 00:30:48.257 ************************************ 00:30:48.257 13:42:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:48.257 13:42:22 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:48.257 13:42:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:48.257 13:42:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:48.257 13:42:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:48.257 ************************************ 00:30:48.257 START TEST nvmf_perf 00:30:48.257 ************************************ 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:48.257 * Looking for test storage... 
00:30:48.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.257 13:42:22 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:48.257 13:42:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:30:50.152 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.152 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:50.152 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:50.152 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:50.152 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:50.152 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:50.152 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:50.152 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:30:50.152 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:50.152 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:30:50.152 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:50.153 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:50.153 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:50.153 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:50.153 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:50.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:30:50.153 00:30:50.153 --- 10.0.0.2 ping statistics --- 00:30:50.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.153 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:50.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:30:50.153 00:30:50.153 --- 10.0.0.1 ping statistics --- 00:30:50.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.153 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=393455 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 393455 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 393455 ']' 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:50.153 13:42:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 [2024-07-13 13:42:24.861500] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:50.153 [2024-07-13 13:42:24.861642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.411 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.411 [2024-07-13 13:42:25.004627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:50.669 [2024-07-13 13:42:25.264242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.669 [2024-07-13 13:42:25.264327] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:50.669 [2024-07-13 13:42:25.264355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.669 [2024-07-13 13:42:25.264375] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.669 [2024-07-13 13:42:25.264396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.669 [2024-07-13 13:42:25.264515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.669 [2024-07-13 13:42:25.264583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.669 [2024-07-13 13:42:25.264667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.669 [2024-07-13 13:42:25.264677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.234 13:42:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:51.234 13:42:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:30:51.234 13:42:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:51.234 13:42:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:51.234 13:42:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:51.234 13:42:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.234 13:42:25 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:51.234 13:42:25 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:54.542 13:42:28 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:54.542 13:42:28 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:54.542 13:42:29 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:54.542 13:42:29 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:55.105 13:42:29 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:55.105 13:42:29 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:55.105 13:42:29 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:55.105 13:42:29 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:55.105 13:42:29 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:55.105 [2024-07-13 13:42:29.778798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.105 13:42:29 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:55.669 13:42:30 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:55.669 13:42:30 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:55.669 13:42:30 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:55.669 13:42:30 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:56.234 13:42:30 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:56.234 [2024-07-13 13:42:30.913003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.234 13:42:30 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:56.491 13:42:31 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:56.491 13:42:31 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:56.491 13:42:31 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:56.491 13:42:31 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:57.865 Initializing NVMe Controllers 00:30:57.865 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:57.865 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:57.865 Initialization complete. Launching workers. 00:30:57.865 ======================================================== 00:30:57.865 Latency(us) 00:30:57.865 Device Information : IOPS MiB/s Average min max 00:30:57.865 PCIE (0000:88:00.0) NSID 1 from core 0: 73175.17 285.84 436.84 49.76 4445.63 00:30:57.865 ======================================================== 00:30:57.865 Total : 73175.17 285.84 436.84 49.76 4445.63 00:30:57.865 00:30:58.124 13:42:32 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:58.124 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.495 Initializing NVMe Controllers 00:30:59.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:59.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:59.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:59.495 Initialization complete. Launching workers. 
00:30:59.495 ======================================================== 00:30:59.495 Latency(us) 00:30:59.495 Device Information : IOPS MiB/s Average min max 00:30:59.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.70 0.32 12417.28 243.00 45355.31 00:30:59.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 52.81 0.21 19236.49 7924.12 47911.27 00:30:59.495 ======================================================== 00:30:59.495 Total : 133.51 0.52 15114.43 243.00 47911.27 00:30:59.495 00:30:59.495 13:42:34 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:59.495 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.869 Initializing NVMe Controllers 00:31:00.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:00.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:00.869 Initialization complete. Launching workers. 00:31:00.869 ======================================================== 00:31:00.869 Latency(us) 00:31:00.869 Device Information : IOPS MiB/s Average min max 00:31:00.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4884.93 19.08 6577.19 1232.25 12315.99 00:31:00.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3866.95 15.11 8307.15 6154.54 16052.17 00:31:00.869 ======================================================== 00:31:00.869 Total : 8751.88 34.19 7341.56 1232.25 16052.17 00:31:00.869 00:31:00.869 13:42:35 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:00.869 13:42:35 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:00.869 13:42:35 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:00.869 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.399 Initializing NVMe Controllers 00:31:03.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.399 Controller IO queue size 128, less than required. 00:31:03.399 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:03.399 Controller IO queue size 128, less than required. 00:31:03.399 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:03.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:03.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:03.399 Initialization complete. Launching workers. 
00:31:03.399 ======================================================== 00:31:03.399 Latency(us) 00:31:03.399 Device Information : IOPS MiB/s Average min max 00:31:03.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 995.92 248.98 137859.55 94634.43 419512.87 00:31:03.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 535.46 133.86 244793.89 117731.47 423371.46 00:31:03.399 ======================================================== 00:31:03.399 Total : 1531.38 382.84 175249.91 94634.43 423371.46 00:31:03.399 00:31:03.656 13:42:38 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:03.656 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.914 No valid NVMe controllers or AIO or URING devices found 00:31:03.914 Initializing NVMe Controllers 00:31:03.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.914 Controller IO queue size 128, less than required. 00:31:03.914 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:03.914 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:03.914 Controller IO queue size 128, less than required. 00:31:03.914 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:03.914 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:03.914 WARNING: Some requested NVMe devices were skipped 00:31:03.914 13:42:38 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:03.914 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.195 Initializing NVMe Controllers 00:31:07.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:07.195 Controller IO queue size 128, less than required. 00:31:07.195 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:07.195 Controller IO queue size 128, less than required. 00:31:07.195 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:07.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:07.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:07.195 Initialization complete. Launching workers. 
00:31:07.195 00:31:07.195 ==================== 00:31:07.195 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:07.195 TCP transport: 00:31:07.195 polls: 12948 00:31:07.195 idle_polls: 4257 00:31:07.195 sock_completions: 8691 00:31:07.195 nvme_completions: 3993 00:31:07.195 submitted_requests: 5982 00:31:07.195 queued_requests: 1 00:31:07.195 00:31:07.195 ==================== 00:31:07.195 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:07.196 TCP transport: 00:31:07.196 polls: 13034 00:31:07.196 idle_polls: 4627 00:31:07.196 sock_completions: 8407 00:31:07.196 nvme_completions: 4103 00:31:07.196 submitted_requests: 6144 00:31:07.196 queued_requests: 1 00:31:07.196 ======================================================== 00:31:07.196 Latency(us) 00:31:07.196 Device Information : IOPS MiB/s Average min max 00:31:07.196 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 996.92 249.23 139713.07 74559.92 435548.01 00:31:07.196 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1024.40 256.10 125782.85 72147.27 313277.90 00:31:07.196 ======================================================== 00:31:07.196 Total : 2021.32 505.33 132653.30 72147.27 435548.01 00:31:07.196 00:31:07.196 13:42:41 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:07.196 13:42:41 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.196 13:42:41 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:07.196 13:42:41 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:31:07.196 13:42:41 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:10.473 13:42:45 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=b9674827-64e2-49c9-82da-33c7c9c3101b 00:31:10.473 13:42:45 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb b9674827-64e2-49c9-82da-33c7c9c3101b 00:31:10.473 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=b9674827-64e2-49c9-82da-33c7c9c3101b 00:31:10.473 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:10.473 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:10.473 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:10.473 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:10.731 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:10.731 { 00:31:10.731 "uuid": "b9674827-64e2-49c9-82da-33c7c9c3101b", 00:31:10.731 "name": "lvs_0", 00:31:10.731 "base_bdev": "Nvme0n1", 00:31:10.731 "total_data_clusters": 238234, 00:31:10.731 "free_clusters": 238234, 00:31:10.731 "block_size": 512, 00:31:10.731 "cluster_size": 4194304 00:31:10.731 } 00:31:10.731 ]' 00:31:10.731 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="b9674827-64e2-49c9-82da-33c7c9c3101b") .free_clusters' 00:31:10.731 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:31:10.731 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="b9674827-64e2-49c9-82da-33c7c9c3101b") .cluster_size' 00:31:10.988 13:42:45 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:10.988 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:31:10.988 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:31:10.988 952936 00:31:10.988 13:42:45 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:10.988 13:42:45 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:10.989 13:42:45 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b9674827-64e2-49c9-82da-33c7c9c3101b lbd_0 20480 00:31:11.553 13:42:46 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=294ae45c-96c0-427c-91ee-11d41fb1a64e 00:31:11.553 13:42:46 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 294ae45c-96c0-427c-91ee-11d41fb1a64e lvs_n_0 00:31:12.486 13:42:46 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=bc298efa-9245-4b15-b742-e07fd773accc 00:31:12.486 13:42:46 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb bc298efa-9245-4b15-b742-e07fd773accc 00:31:12.486 13:42:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=bc298efa-9245-4b15-b742-e07fd773accc 00:31:12.486 13:42:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:12.486 13:42:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:12.486 13:42:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:12.486 13:42:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:12.486 13:42:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:12.486 { 00:31:12.486 "uuid": "b9674827-64e2-49c9-82da-33c7c9c3101b", 00:31:12.486 "name": "lvs_0", 00:31:12.486 "base_bdev": "Nvme0n1", 00:31:12.486 "total_data_clusters": 238234, 00:31:12.486 "free_clusters": 233114, 00:31:12.486 "block_size": 512, 00:31:12.486 "cluster_size": 4194304 00:31:12.486 }, 00:31:12.486 { 00:31:12.486 "uuid": "bc298efa-9245-4b15-b742-e07fd773accc", 00:31:12.486 "name": "lvs_n_0", 00:31:12.486 "base_bdev": "294ae45c-96c0-427c-91ee-11d41fb1a64e", 00:31:12.486 "total_data_clusters": 5114, 00:31:12.486 "free_clusters": 5114, 00:31:12.486 "block_size": 512, 00:31:12.486 "cluster_size": 4194304 00:31:12.486 } 00:31:12.486 ]' 00:31:12.486 13:42:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="bc298efa-9245-4b15-b742-e07fd773accc") .free_clusters' 00:31:12.743 13:42:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:31:12.743 13:42:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="bc298efa-9245-4b15-b742-e07fd773accc") .cluster_size' 00:31:12.743 13:42:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:12.743 13:42:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:31:12.743 13:42:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:31:12.743 20456 00:31:12.743 13:42:47 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:12.743 13:42:47 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bc298efa-9245-4b15-b742-e07fd773accc lbd_nest_0 20456 00:31:13.001 13:42:47 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=2e80be4e-1471-420f-b504-b1e787fb27ae 00:31:13.001 13:42:47 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:13.258 13:42:47 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:13.258 13:42:47 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 2e80be4e-1471-420f-b504-b1e787fb27ae 00:31:13.516 13:42:48 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:13.797 13:42:48 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:13.797 13:42:48 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:13.797 13:42:48 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:13.797 13:42:48 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:13.797 13:42:48 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:13.797 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.004 Initializing NVMe Controllers 00:31:26.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:26.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:26.004 Initialization complete. Launching workers. 00:31:26.004 ======================================================== 00:31:26.004 Latency(us) 00:31:26.004 Device Information : IOPS MiB/s Average min max 00:31:26.004 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.90 0.02 20040.40 285.30 48763.52 00:31:26.004 ======================================================== 00:31:26.004 Total : 49.90 0.02 20040.40 285.30 48763.52 00:31:26.004 00:31:26.004 13:42:58 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:26.004 13:42:58 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:26.004 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.971 Initializing NVMe Controllers 00:31:35.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:35.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:35.971 Initialization complete. Launching workers. 
00:31:35.971 ======================================================== 00:31:35.971 Latency(us) 00:31:35.971 Device Information : IOPS MiB/s Average min max 00:31:35.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 76.50 9.56 13087.04 5979.34 50856.56 00:31:35.971 ======================================================== 00:31:35.971 Total : 76.50 9.56 13087.04 5979.34 50856.56 00:31:35.971 00:31:35.971 13:43:09 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:35.971 13:43:09 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:35.971 13:43:09 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:35.971 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.933 Initializing NVMe Controllers 00:31:45.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:45.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:45.933 Initialization complete. Launching workers. 00:31:45.933 ======================================================== 00:31:45.933 Latency(us) 00:31:45.933 Device Information : IOPS MiB/s Average min max 00:31:45.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4645.12 2.27 6887.75 646.10 13508.20 00:31:45.933 ======================================================== 00:31:45.933 Total : 4645.12 2.27 6887.75 646.10 13508.20 00:31:45.933 00:31:45.933 13:43:19 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:45.934 13:43:19 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:45.934 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.899 Initializing NVMe Controllers 00:31:55.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:55.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:55.899 Initialization complete. Launching workers. 00:31:55.899 ======================================================== 00:31:55.899 Latency(us) 00:31:55.899 Device Information : IOPS MiB/s Average min max 00:31:55.899 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2020.21 252.53 15845.99 881.92 32785.73 00:31:55.899 ======================================================== 00:31:55.899 Total : 2020.21 252.53 15845.99 881.92 32785.73 00:31:55.899 00:31:55.899 13:43:30 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:55.899 13:43:30 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:55.899 13:43:30 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:55.899 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.172 Initializing NVMe Controllers 00:32:08.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:08.172 Controller IO queue size 128, less than required. 00:32:08.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:08.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:08.172 Initialization complete. Launching workers. 00:32:08.172 ======================================================== 00:32:08.172 Latency(us) 00:32:08.172 Device Information : IOPS MiB/s Average min max 00:32:08.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8026.22 3.92 15953.46 2132.47 36555.09 00:32:08.172 ======================================================== 00:32:08.172 Total : 8026.22 3.92 15953.46 2132.47 36555.09 00:32:08.172 00:32:08.172 13:43:40 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:08.172 13:43:40 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:08.172 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.144 Initializing NVMe Controllers 00:32:18.144 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:18.144 Controller IO queue size 128, less than required. 00:32:18.144 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:18.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:18.144 Initialization complete. Launching workers. 00:32:18.144 ======================================================== 00:32:18.144 Latency(us) 00:32:18.144 Device Information : IOPS MiB/s Average min max 00:32:18.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1204.94 150.62 106834.09 24757.53 216431.37 00:32:18.144 ======================================================== 00:32:18.144 Total : 1204.94 150.62 106834.09 24757.53 216431.37 00:32:18.144 00:32:18.144 13:43:51 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:18.144 13:43:51 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2e80be4e-1471-420f-b504-b1e787fb27ae 00:32:18.144 13:43:52 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:18.401 13:43:52 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 294ae45c-96c0-427c-91ee-11d41fb1a64e 00:32:18.659 13:43:53 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:18.917 rmmod nvme_tcp 00:32:18.917 rmmod nvme_fabrics 00:32:18.917 rmmod nvme_keyring 00:32:18.917 13:43:53 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 393455 ']' 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 393455 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 393455 ']' 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 393455 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 393455 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 393455' 00:32:18.917 killing process with pid 393455 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 393455 00:32:18.917 13:43:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 393455 00:32:22.205 13:43:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:22.205 13:43:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:22.205 13:43:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:22.205 13:43:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:22.205 13:43:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:22.205 13:43:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.205 13:43:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:22.205 13:43:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.586 13:43:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:23.586 00:32:23.586 real 1m35.606s 00:32:23.586 user 5m49.941s 00:32:23.586 sys 0m16.541s 00:32:23.586 13:43:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:23.586 13:43:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:23.586 ************************************ 00:32:23.586 END TEST nvmf_perf 00:32:23.586 ************************************ 00:32:23.586 13:43:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:23.586 13:43:58 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:23.586 13:43:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:23.586 13:43:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:23.586 13:43:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.586 ************************************ 00:32:23.586 START TEST nvmf_fio_host 00:32:23.586 ************************************ 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:23.845 * Looking for test storage... 
00:32:23.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:32:23.845 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:23.846 13:43:58 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:25.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:25.747 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:25.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:25.747 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:25.747 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:26.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:32:26.006 00:32:26.006 --- 10.0.0.2 ping statistics --- 00:32:26.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.006 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:26.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:32:26.006 00:32:26.006 --- 10.0.0.1 ping statistics --- 00:32:26.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.006 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=406055 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 406055 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 406055 ']' 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:26.006 13:44:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.006 [2024-07-13 13:44:00.643925] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:26.006 [2024-07-13 13:44:00.644068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:26.006 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.265 [2024-07-13 13:44:00.786301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:26.523 [2024-07-13 13:44:01.048722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:26.523 [2024-07-13 13:44:01.048801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:26.523 [2024-07-13 13:44:01.048829] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:26.523 [2024-07-13 13:44:01.048850] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:26.523 [2024-07-13 13:44:01.048880] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:26.523 [2024-07-13 13:44:01.049004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.523 [2024-07-13 13:44:01.049077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:26.523 [2024-07-13 13:44:01.049172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.523 [2024-07-13 13:44:01.049182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:27.090 13:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:27.090 13:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:32:27.090 13:44:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:27.090 [2024-07-13 13:44:01.764919] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.090 13:44:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:27.090 13:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:27.090 13:44:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.090 13:44:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:27.656 Malloc1 00:32:27.656 13:44:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:27.656 13:44:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:27.914 13:44:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:28.172 [2024-07-13 13:44:02.851401] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.172 13:44:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:28.429 13:44:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:28.685 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:28.685 fio-3.35 00:32:28.685 Starting 1 thread 00:32:28.943 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.540 00:32:31.540 test: (groupid=0, jobs=1): err= 0: pid=406533: Sat Jul 13 13:44:05 2024 00:32:31.540 read: IOPS=6361, BW=24.8MiB/s (26.1MB/s)(49.9MiB/2009msec) 00:32:31.540 slat (usec): min=2, max=137, avg= 3.61, stdev= 1.98 00:32:31.540 clat (usec): min=3578, max=18555, avg=11076.19, stdev=898.88 00:32:31.540 lat (usec): min=3617, max=18558, avg=11079.80, stdev=898.80 00:32:31.540 clat percentiles (usec): 00:32:31.540 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:32:31.540 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:32:31.540 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:32:31.540 | 99.00th=[13042], 99.50th=[13304], 99.90th=[17695], 99.95th=[18220], 00:32:31.540 | 99.99th=[18482] 00:32:31.540 bw ( KiB/s): min=24352, max=25928, per=99.92%, avg=25424.00, stdev=723.05, samples=4 00:32:31.540 iops : min= 6088, max= 6482, avg=6356.00, stdev=180.76, samples=4 00:32:31.540 write: IOPS=6362, BW=24.9MiB/s (26.1MB/s)(49.9MiB/2009msec); 0 zone resets 00:32:31.540 slat (usec): min=3, max=140, avg= 3.78, stdev= 1.68 00:32:31.540 clat (usec): min=1550, max=16585, avg=8952.66, stdev=764.10 00:32:31.540 lat (usec): min=1562, max=16589, avg=8956.44, stdev=764.08 00:32:31.540 clat percentiles (usec): 00:32:31.540 | 1.00th=[ 7177], 5.00th=[ 7832], 
10.00th=[ 8094], 20.00th=[ 8455], 00:32:31.540 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:32:31.540 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:32:31.540 | 99.00th=[10552], 99.50th=[10814], 99.90th=[15401], 99.95th=[16319], 00:32:31.540 | 99.99th=[16581] 00:32:31.540 bw ( KiB/s): min=25216, max=25664, per=99.99%, avg=25446.00, stdev=183.75, samples=4 00:32:31.540 iops : min= 6304, max= 6416, avg=6361.50, stdev=45.94, samples=4 00:32:31.540 lat (msec) : 2=0.01%, 4=0.08%, 10=51.72%, 20=48.18% 00:32:31.540 cpu : usr=62.05%, sys=34.16%, ctx=54, majf=0, minf=1537 00:32:31.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:31.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:31.540 issued rwts: total=12780,12782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:31.540 00:32:31.540 Run status group 0 (all jobs): 00:32:31.540 READ: bw=24.8MiB/s (26.1MB/s), 24.8MiB/s-24.8MiB/s (26.1MB/s-26.1MB/s), io=49.9MiB (52.3MB), run=2009-2009msec 00:32:31.540 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=49.9MiB (52.4MB), run=2009-2009msec 00:32:31.540 ----------------------------------------------------- 00:32:31.540 Suppressions used: 00:32:31.540 count bytes template 00:32:31.540 1 57 /usr/src/fio/parse.c 00:32:31.540 1 8 libtcmalloc_minimal.so 00:32:31.540 ----------------------------------------------------- 00:32:31.540 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:31.540 13:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:31.799 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:31.799 fio-3.35 00:32:31.799 Starting 1 thread 00:32:31.799 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.321 00:32:34.321 test: (groupid=0, jobs=1): err= 0: pid=406864: Sat Jul 13 13:44:08 2024 00:32:34.321 read: IOPS=6247, BW=97.6MiB/s (102MB/s)(197MiB/2013msec) 00:32:34.321 slat (usec): min=3, max=104, avg= 4.99, stdev= 1.99 00:32:34.321 clat (usec): min=3406, max=24650, avg=12061.84, stdev=2668.54 00:32:34.321 lat (usec): min=3411, max=24655, avg=12066.83, stdev=2668.55 00:32:34.321 clat percentiles (usec): 00:32:34.321 | 1.00th=[ 6063], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[10028], 00:32:34.321 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11863], 60.00th=[12387], 00:32:34.321 | 70.00th=[13304], 80.00th=[14353], 90.00th=[15533], 95.00th=[16319], 00:32:34.321 | 99.00th=[19268], 99.50th=[20317], 99.90th=[20841], 99.95th=[22938], 00:32:34.321 | 99.99th=[23987] 00:32:34.321 bw ( KiB/s): min=41760, max=56448, per=49.34%, avg=49320.00, stdev=6455.35, samples=4 00:32:34.321 iops : min= 2610, max= 3528, avg=3082.50, stdev=403.46, samples=4 00:32:34.321 write: IOPS=3561, BW=55.6MiB/s (58.3MB/s)(101MiB/1818msec); 0 zone resets 00:32:34.321 slat (usec): min=33, max=266, avg=36.82, stdev= 7.55 00:32:34.321 clat (usec): min=9358, max=26980, avg=15216.55, stdev=2795.36 00:32:34.321 lat (usec): min=9391, max=27014, avg=15253.37, stdev=2795.46 00:32:34.321 clat percentiles (usec): 00:32:34.321 | 1.00th=[10159], 5.00th=[11207], 10.00th=[11994], 20.00th=[12780], 00:32:34.321 | 30.00th=[13435], 40.00th=[14222], 50.00th=[14877], 60.00th=[15533], 00:32:34.321 | 70.00th=[16450], 80.00th=[17695], 90.00th=[19268], 95.00th=[20055], 00:32:34.321 | 99.00th=[22938], 99.50th=[23987], 99.90th=[25297], 99.95th=[26870], 00:32:34.321 | 99.99th=[26870] 00:32:34.321 bw ( KiB/s): min=43200, max=59392, per=90.39%, avg=51504.00, stdev=6842.44, samples=4 00:32:34.321 iops : min= 2700, max= 3712, avg=3219.00, stdev=427.65, samples=4 00:32:34.321 lat (msec) : 4=0.03%, 10=13.24%, 20=84.62%, 50=2.11% 00:32:34.321 cpu : usr=75.99%, sys=20.68%, ctx=28, majf=0, minf=2081 00:32:34.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:32:34.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:34.321 issued rwts: total=12577,6474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:34.321 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:34.321 00:32:34.321 Run status group 0 (all jobs): 00:32:34.321 READ: bw=97.6MiB/s (102MB/s), 97.6MiB/s-97.6MiB/s (102MB/s-102MB/s), io=197MiB (206MB), run=2013-2013msec 00:32:34.321 WRITE: bw=55.6MiB/s (58.3MB/s), 55.6MiB/s-55.6MiB/s (58.3MB/s-58.3MB/s), io=101MiB (106MB), run=1818-1818msec 00:32:34.321 
----------------------------------------------------- 00:32:34.321 Suppressions used: 00:32:34.321 count bytes template 00:32:34.321 1 57 /usr/src/fio/parse.c 00:32:34.321 108 10368 /usr/src/fio/iolog.c 00:32:34.321 1 8 libtcmalloc_minimal.so 00:32:34.321 ----------------------------------------------------- 00:32:34.321 00:32:34.321 13:44:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:34.578 13:44:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:34.578 13:44:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:34.578 13:44:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:34.578 13:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:32:34.578 13:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:32:34.578 13:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:34.578 13:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:34.578 13:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:32:34.578 13:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:32:34.578 13:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:32:34.578 13:44:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:37.859 Nvme0n1 00:32:37.859 13:44:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=4efa9e80-af74-4d55-9e32-db67a0770fcf 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 4efa9e80-af74-4d55-9e32-db67a0770fcf 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=4efa9e80-af74-4d55-9e32-db67a0770fcf 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:41.137 { 00:32:41.137 "uuid": "4efa9e80-af74-4d55-9e32-db67a0770fcf", 00:32:41.137 "name": "lvs_0", 00:32:41.137 "base_bdev": "Nvme0n1", 00:32:41.137 "total_data_clusters": 930, 00:32:41.137 "free_clusters": 930, 00:32:41.137 "block_size": 512, 00:32:41.137 "cluster_size": 1073741824 00:32:41.137 } 00:32:41.137 ]' 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="4efa9e80-af74-4d55-9e32-db67a0770fcf") .free_clusters' 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="4efa9e80-af74-4d55-9e32-db67a0770fcf") .cluster_size' 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:32:41.137 952320 00:32:41.137 13:44:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:41.394 72e7541a-109d-472b-82e1-471ca978b352 00:32:41.394 13:44:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:41.652 13:44:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:41.909 13:44:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:42.167 13:44:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:42.167 13:44:16 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:42.425 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:42.425 fio-3.35 00:32:42.425 Starting 1 thread 00:32:42.425 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.951 00:32:44.951 test: (groupid=0, jobs=1): err= 0: pid=408256: Sat Jul 13 13:44:19 2024 00:32:44.951 read: IOPS=4486, BW=17.5MiB/s (18.4MB/s)(35.2MiB/2011msec) 00:32:44.951 slat (usec): min=3, max=177, avg= 3.87, stdev= 2.67 00:32:44.951 clat (usec): min=1218, max=172745, avg=15618.53, stdev=13070.36 00:32:44.951 lat (usec): min=1223, max=172797, avg=15622.40, stdev=13070.79 00:32:44.951 clat percentiles (msec): 00:32:44.951 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:32:44.951 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 15], 00:32:44.951 | 70.00th=[ 16], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 17], 00:32:44.951 | 99.00th=[ 19], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:32:44.951 | 99.99th=[ 174] 00:32:44.951 bw ( KiB/s): min=12744, max=19752, per=99.84%, avg=17916.00, stdev=3449.95, samples=4 00:32:44.951 iops : min= 3186, max= 4938, avg=4479.00, stdev=862.49, samples=4 00:32:44.951 write: IOPS=4481, BW=17.5MiB/s (18.4MB/s)(35.2MiB/2011msec); 0 zone resets 00:32:44.951 slat (usec): min=3, max=135, avg= 4.08, stdev= 1.98 00:32:44.951 clat (usec): min=358, max=170218, avg=12679.82, stdev=12333.48 00:32:44.951 lat (usec): min=367, max=170227, avg=12683.90, stdev=12333.94 00:32:44.951 clat percentiles (msec): 00:32:44.951 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:32:44.951 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:32:44.951 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:32:44.951 | 99.00th=[ 16], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:32:44.951 | 99.99th=[ 171] 00:32:44.951 bw ( KiB/s): min=13352, max=19648, per=99.85%, avg=17900.00, stdev=3036.52, samples=4 00:32:44.951 iops : min= 3338, max= 4912, avg=4475.00, stdev=759.13, samples=4 00:32:44.951 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:44.951 lat (msec) : 2=0.02%, 4=0.11%, 10=2.48%, 20=96.51%, 50=0.14% 00:32:44.951 lat (msec) : 250=0.71% 00:32:44.951 cpu : usr=62.69%, sys=34.13%, ctx=92, majf=0, minf=1533 00:32:44.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:32:44.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:44.951 issued rwts: total=9022,9013,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.951 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:44.951 00:32:44.951 Run status group 0 (all jobs): 00:32:44.951 READ: bw=17.5MiB/s (18.4MB/s), 17.5MiB/s-17.5MiB/s (18.4MB/s-18.4MB/s), io=35.2MiB (37.0MB), run=2011-2011msec 00:32:44.951 WRITE: bw=17.5MiB/s (18.4MB/s), 17.5MiB/s-17.5MiB/s (18.4MB/s-18.4MB/s), io=35.2MiB (36.9MB), run=2011-2011msec 00:32:44.951 ----------------------------------------------------- 00:32:44.951 Suppressions used: 00:32:44.951 count bytes template 00:32:44.951 1 58 /usr/src/fio/parse.c 00:32:44.951 1 8 libtcmalloc_minimal.so 00:32:44.951 ----------------------------------------------------- 00:32:44.951 00:32:44.951 13:44:19 nvmf_tcp.nvmf_fio_host -- 
host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:45.209 13:44:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:46.583 13:44:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=cde93755-1c44-4a7a-a36e-dbbb8969e7d0 00:32:46.583 13:44:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb cde93755-1c44-4a7a-a36e-dbbb8969e7d0 00:32:46.583 13:44:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=cde93755-1c44-4a7a-a36e-dbbb8969e7d0 00:32:46.583 13:44:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:46.583 13:44:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:46.583 13:44:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:46.583 13:44:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:46.583 13:44:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:46.583 { 00:32:46.583 "uuid": "4efa9e80-af74-4d55-9e32-db67a0770fcf", 00:32:46.583 "name": "lvs_0", 00:32:46.583 "base_bdev": "Nvme0n1", 00:32:46.583 "total_data_clusters": 930, 00:32:46.583 "free_clusters": 0, 00:32:46.583 "block_size": 512, 00:32:46.583 "cluster_size": 1073741824 00:32:46.583 }, 00:32:46.583 { 00:32:46.583 "uuid": "cde93755-1c44-4a7a-a36e-dbbb8969e7d0", 00:32:46.583 "name": "lvs_n_0", 00:32:46.583 "base_bdev": "72e7541a-109d-472b-82e1-471ca978b352", 00:32:46.583 "total_data_clusters": 237847, 00:32:46.583 "free_clusters": 237847, 00:32:46.583 "block_size": 512, 00:32:46.583 "cluster_size": 4194304 00:32:46.583 } 00:32:46.583 ]' 00:32:46.583 13:44:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="cde93755-1c44-4a7a-a36e-dbbb8969e7d0") .free_clusters' 00:32:46.583 13:44:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:32:46.583 13:44:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="cde93755-1c44-4a7a-a36e-dbbb8969e7d0") .cluster_size' 00:32:46.841 13:44:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:46.841 13:44:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:32:46.841 13:44:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:32:46.841 951388 00:32:46.841 13:44:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:47.776 bd1be4ec-c9c1-4cde-8439-948aac7794b6 00:32:47.776 13:44:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:48.034 13:44:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:48.291 13:44:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- 
host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:48.548 13:44:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:48.805 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:48.805 fio-3.35 00:32:48.805 Starting 1 thread 00:32:48.805 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.413 00:32:51.413 test: (groupid=0, jobs=1): err= 0: pid=409110: Sat Jul 13 13:44:26 2024 00:32:51.413 read: IOPS=4281, BW=16.7MiB/s (17.5MB/s)(33.6MiB/2012msec) 00:32:51.413 slat (usec): min=2, max=194, avg= 3.84, stdev= 2.96 00:32:51.413 clat (usec): min=6136, max=26357, avg=16449.37, stdev=1486.06 00:32:51.413 lat (usec): min=6142, max=26360, avg=16453.21, stdev=1485.93 00:32:51.413 clat percentiles (usec): 00:32:51.413 | 1.00th=[12911], 5.00th=[14222], 10.00th=[14746], 20.00th=[15270], 00:32:51.413 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16450], 60.00th=[16712], 00:32:51.413 | 70.00th=[17171], 80.00th=[17695], 90.00th=[18220], 95.00th=[18744], 00:32:51.413 | 99.00th=[20055], 99.50th=[21103], 99.90th=[24249], 99.95th=[25822], 00:32:51.413 | 99.99th=[26346] 00:32:51.413 bw ( KiB/s): min=16120, max=17656, per=99.68%, 
avg=17070.00, stdev=673.50, samples=4 00:32:51.413 iops : min= 4030, max= 4414, avg=4267.50, stdev=168.38, samples=4 00:32:51.413 write: IOPS=4282, BW=16.7MiB/s (17.5MB/s)(33.7MiB/2012msec); 0 zone resets 00:32:51.413 slat (usec): min=3, max=124, avg= 4.00, stdev= 2.09 00:32:51.413 clat (usec): min=2971, max=24113, avg=13238.18, stdev=1309.94 00:32:51.413 lat (usec): min=2979, max=24116, avg=13242.18, stdev=1309.89 00:32:51.413 clat percentiles (usec): 00:32:51.413 | 1.00th=[10421], 5.00th=[11338], 10.00th=[11731], 20.00th=[12256], 00:32:51.413 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13566], 00:32:51.413 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14746], 95.00th=[15139], 00:32:51.413 | 99.00th=[16581], 99.50th=[17171], 99.90th=[21890], 99.95th=[22414], 00:32:51.413 | 99.99th=[23987] 00:32:51.413 bw ( KiB/s): min=17024, max=17264, per=100.00%, avg=17138.00, stdev=99.57, samples=4 00:32:51.413 iops : min= 4256, max= 4316, avg=4284.50, stdev=24.89, samples=4 00:32:51.413 lat (msec) : 4=0.02%, 10=0.35%, 20=99.09%, 50=0.55% 00:32:51.413 cpu : usr=60.42%, sys=36.65%, ctx=81, majf=0, minf=1534 00:32:51.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:51.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:51.413 issued rwts: total=8614,8617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:51.413 00:32:51.413 Run status group 0 (all jobs): 00:32:51.413 READ: bw=16.7MiB/s (17.5MB/s), 16.7MiB/s-16.7MiB/s (17.5MB/s-17.5MB/s), io=33.6MiB (35.3MB), run=2012-2012msec 00:32:51.413 WRITE: bw=16.7MiB/s (17.5MB/s), 16.7MiB/s-16.7MiB/s (17.5MB/s-17.5MB/s), io=33.7MiB (35.3MB), run=2012-2012msec 00:32:51.671 ----------------------------------------------------- 00:32:51.671 Suppressions used: 00:32:51.671 count bytes template 00:32:51.671 1 58 /usr/src/fio/parse.c 00:32:51.671 1 8 libtcmalloc_minimal.so 00:32:51.671 ----------------------------------------------------- 00:32:51.671 00:32:51.671 13:44:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:51.930 13:44:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:51.930 13:44:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:56.113 13:44:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:56.371 13:44:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:59.649 13:44:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:59.649 13:44:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:01.548 rmmod nvme_tcp 00:33:01.548 rmmod nvme_fabrics 00:33:01.548 rmmod nvme_keyring 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 406055 ']' 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 406055 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 406055 ']' 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 406055 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 406055 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:01.548 13:44:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:01.549 13:44:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 406055' 00:33:01.549 killing process with pid 406055 00:33:01.549 13:44:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 406055 00:33:01.549 13:44:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 406055 00:33:02.924 13:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:02.924 13:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:02.924 13:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:02.924 13:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:02.924 13:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:02.924 13:44:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.924 13:44:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:02.924 13:44:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.458 13:44:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:05.458 00:33:05.458 real 0m41.382s 00:33:05.458 user 2m35.552s 00:33:05.458 sys 0m8.487s 00:33:05.458 13:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:05.458 13:44:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.458 ************************************ 00:33:05.458 END TEST nvmf_fio_host 00:33:05.458 ************************************ 00:33:05.458 13:44:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:05.458 13:44:39 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test 
nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:05.458 13:44:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:05.458 13:44:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:05.458 13:44:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:05.458 ************************************ 00:33:05.458 START TEST nvmf_failover 00:33:05.458 ************************************ 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:05.458 * Looking for test storage... 00:33:05.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:05.458 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:33:05.459 13:44:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:07.360 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:07.361 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:07.361 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.361 13:44:41 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:07.361 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:07.361 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:07.361 13:44:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:07.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:07.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:33:07.361 00:33:07.361 --- 10.0.0.2 ping statistics --- 00:33:07.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.361 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:07.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:07.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:33:07.361 00:33:07.361 --- 10.0.0.1 ping statistics --- 00:33:07.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.361 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=412608 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 412608 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 412608 ']' 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
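The nvmf_tcp_init steps traced above isolate the target-side port in its own network namespace before the target application is started. A minimal standalone sketch of that bring-up, using the device and address values from this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2) and assuming root privileges and that both ports are already bound to their kernel driver, would be:

    # sketch of the namespace bring-up traced above, not the actual common.sh helpers
    NETNS=cvl_0_0_ns_spdk                                        # namespace name used by this run
    ip netns add "$NETNS"
    ip link set cvl_0_0 netns "$NETNS"                           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side keeps 10.0.0.1
    ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side gets 10.0.0.2
    ip link set cvl_0_1 up
    ip netns exec "$NETNS" ip link set cvl_0_0 up
    ip netns exec "$NETNS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic through
    ping -c 1 10.0.0.2                                           # initiator -> target sanity check
    ip netns exec "$NETNS" ping -c 1 10.0.0.1                    # target -> initiator sanity check

The two pings mirror the connectivity checks recorded in the log output above.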
00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:07.361 13:44:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:07.619 [2024-07-13 13:44:42.144813] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:07.619 [2024-07-13 13:44:42.145025] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.619 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.619 [2024-07-13 13:44:42.302153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:07.877 [2024-07-13 13:44:42.568070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.877 [2024-07-13 13:44:42.568134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.877 [2024-07-13 13:44:42.568179] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.877 [2024-07-13 13:44:42.568197] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.877 [2024-07-13 13:44:42.568215] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:07.877 [2024-07-13 13:44:42.568377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:07.877 [2024-07-13 13:44:42.568838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:07.877 [2024-07-13 13:44:42.568950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.440 13:44:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:08.440 13:44:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:08.440 13:44:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:08.440 13:44:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:08.440 13:44:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:08.440 13:44:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:08.440 13:44:43 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:08.697 [2024-07-13 13:44:43.384511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.697 13:44:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:09.262 Malloc0 00:33:09.262 13:44:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:09.519 13:44:44 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:09.519 13:44:44 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:09.777 [2024-07-13 13:44:44.470696] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.777 13:44:44 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:10.034 [2024-07-13 13:44:44.711427] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:10.034 13:44:44 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:10.292 [2024-07-13 13:44:44.956340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:10.292 13:44:44 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=412931 00:33:10.292 13:44:44 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:10.292 13:44:44 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:10.292 13:44:44 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 412931 /var/tmp/bdevperf.sock 00:33:10.292 13:44:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 412931 ']' 00:33:10.292 13:44:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:10.292 13:44:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:10.292 13:44:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:10.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
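Condensed, the target-side preparation that failover.sh has traced up to this point reduces to the rpc.py sequence below (workspace paths shortened to $rpc_py; this is a sketch of the calls already shown in the trace, not a substitute for the script itself):

    rpc_py=./scripts/rpc.py   # shortened; the run uses the full /var/jenkins/workspace/... path
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # three listeners on one subsystem give bdevperf alternate paths for the failover steps that follow
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422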
00:33:10.292 13:44:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:10.292 13:44:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:33:11.700 13:44:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:11.700 13:44:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:33:11.700 13:44:45 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:11.700 NVMe0n1
00:33:11.700 13:44:46 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:11.958
00:33:11.958 13:44:46 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=413171
00:33:11.958 13:44:46 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:11.958 13:44:46 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:33:13.333 13:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:13.333 [2024-07-13 13:44:47.886017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set
[... identical tcp.c:1607 nvmf_tcp_qpair_set_recv_state *ERROR* entries for tqpair=0x618000003880 repeat from 13:44:47.886100 through 13:44:47.887623; duplicate entries omitted ...]
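The commands above drive the first failover: the same subsystem is attached over two TCP paths, bdevperf I/O is started, and the primary listener is then removed so the initiator has to fail over to the surviving path. A minimal, illustrative sketch of that RPC sequence (commands, addresses, and the NQN are taken verbatim from the log above; this is a sketch, not the failover.sh source itself):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1
  # Attach the same subsystem over two TCP paths (ports 4420 and 4421).
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
  # Kick off I/O in the already-running bdevperf instance, then drop the primary
  # listener on the target so outstanding I/O must fail over to port 4421.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests &
  sleep 1
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  # (the test later re-adds the 4420 listener and waits for bdevperf to finish,
  # as the following log entries show)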
00:33:13.334 13:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:33:16.620 13:44:50 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:16.620
00:33:16.620 13:44:51 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:33:16.879 [2024-07-13 13:44:51.530234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set
[... identical *ERROR* entries for tqpair=0x618000004080 repeat from 13:44:51.530312 through 13:44:51.530438; duplicate entries omitted ...]
00:33:16.879 13:44:51 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:33:20.162 13:44:54 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:20.162 [2024-07-13 13:44:54.775235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:20.162 13:44:54 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:33:21.097 13:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:33:21.355 13:44:56 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 413171
00:33:27.912 0
00:33:27.912 13:45:01 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 412931
00:33:27.912 13:45:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 412931 ']'
00:33:27.912 13:45:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 412931
00:33:27.912 13:45:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:33:27.912 13:45:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:27.912 13:45:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 412931
00:33:27.912 13:45:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:33:27.912 13:45:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:33:27.912 13:45:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 412931'
00:33:27.912 killing process with pid 412931
00:33:27.912 13:45:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 412931
00:33:28.177 13:45:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 412931
00:33:28.177 13:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:28.177 [2024-07-13 13:44:45.060213] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:33:28.177 [2024-07-13 13:44:45.060383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412931 ]
00:33:28.177 EAL: No free 2048 kB hugepages reported on node 1
00:33:28.177 [2024-07-13 13:44:45.189034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:28.177 [2024-07-13 13:44:45.422018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:28.177 Running I/O for 15 seconds...
00:33:28.177 [2024-07-13 13:44:47.889094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.177 [2024-07-13 13:44:47.889151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every in-flight command on qid:1 -- READs at lba 55384-55864 and WRITEs at lba 55872-56264 -- each completed with ABORTED - SQ DELETION (00/08); entries from 13:44:47.889206 through 13:44:47.894160 omitted ...]
00:33:28.180 [2024-07-13 13:44:47.894209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:28.180 [2024-07-13 13:44:47.894235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56272 len:8 PRP1 0x0 PRP2 0x0
00:33:28.180 [2024-07-13 13:44:47.894256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:28.180 [2024-07-13 13:44:47.894283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the aborting-queued-i/o / manual-complete sequence repeats for the queued WRITEs at lba 56280-56392, each also completed with ABORTED - SQ DELETION (00/08); entries from 13:44:47.894302 through 13:44:47.895371 omitted ...]
00:33:28.180 [2024-07-13 13:44:47.895653] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller.
00:33:28.180 [2024-07-13 13:44:47.895683] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:33:28.180 [2024-07-13 13:44:47.895746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:28.180 [2024-07-13 13:44:47.895772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:28.180 [2024-07-13 13:44:47.895795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:28.180 [2024-07-13 13:44:47.895815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:28.180 [2024-07-13 13:44:47.895835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:28.180 [2024-07-13 13:44:47.895854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:28.180 [2024-07-13 13:44:47.895888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:28.180 [2024-07-13 13:44:47.895912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:28.180 [2024-07-13 13:44:47.895930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:28.180 [2024-07-13 13:44:47.896021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:33:28.180 [2024-07-13 13:44:47.900369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:28.180 [2024-07-13 13:44:47.943130] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:33:28.180 [2024-07-13 13:44:51.532069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-13 13:44:51.532129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.180 [2024-07-13 13:44:51.532170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-13 13:44:51.532194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.180 [2024-07-13 13:44:51.532219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-13 13:44:51.532241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.180 [2024-07-13 13:44:51.532264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-13 13:44:51.532286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.180 [2024-07-13 13:44:51.532309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-13 13:44:51.532330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.180 [2024-07-13 13:44:51.532354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.180 [2024-07-13 13:44:51.532385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.180 [2024-07-13 13:44:51.532409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.532430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.532453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.532474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.532497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.532518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.532541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.532561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 
13:44:51.532583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.532604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.532627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.532648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.532672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.532693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.532716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.532737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.532759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.532781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.532803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.532824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.532847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.532875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.532901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.532922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.532950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.532971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.532994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.533014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.533057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.533101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.533145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.533198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.533242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.533285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.533329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.533372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.533415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.533459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533483] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.533508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.181 [2024-07-13 13:44:51.533554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.181 [2024-07-13 13:44:51.533597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.181 [2024-07-13 13:44:51.533641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.181 [2024-07-13 13:44:51.533685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.181 [2024-07-13 13:44:51.533728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.181 [2024-07-13 13:44:51.533771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.181 [2024-07-13 13:44:51.533815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.181 [2024-07-13 13:44:51.533837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.181 [2024-07-13 13:44:51.533858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.533892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.533914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.533936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.533957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.533979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 
13:44:51.534873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.534965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.534987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.182 [2024-07-13 13:44:51.535707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.182 [2024-07-13 13:44:51.535781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112600 len:8 
PRP1 0x0 PRP2 0x0 00:33:28.182 [2024-07-13 13:44:51.535806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.182 [2024-07-13 13:44:51.535833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.535853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.535880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112608 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.535901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.535922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.535939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.535957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112616 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.535975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.535994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112624 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.536045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112632 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.536116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112640 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.536186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112648 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 
13:44:51.536257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112656 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.536327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112664 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.536402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112672 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.536474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112680 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.536545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112688 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.536615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112696 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.536685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112704 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.536755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112712 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.536825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112720 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.536903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.536943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.536961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112728 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.536979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.536997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.537014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.537031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112736 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.537050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.537068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.537085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.537101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112744 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.537133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.537154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.537172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.537189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112752 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.537207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.537225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.537241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.537258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112760 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.537276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.537294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.537311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.537327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112768 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.537346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.537365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.537381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.537397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112776 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.537416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.537435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.537451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.537467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112784 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.537489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.537509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.537526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.537542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112792 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.537560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:28.183 [2024-07-13 13:44:51.537578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.183 [2024-07-13 13:44:51.537595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.183 [2024-07-13 13:44:51.537612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112800 len:8 PRP1 0x0 PRP2 0x0 00:33:28.183 [2024-07-13 13:44:51.537630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.537649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.537665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.537682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112808 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.537700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.537720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.537736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.537753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112816 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.537771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.537790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.537807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.537824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112824 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.537842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.537860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.537883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.537901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112832 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.537920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.537939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.537955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.537972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112840 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.537990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 
13:44:51.538009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112848 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.538083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112856 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.538153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112864 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.538223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112872 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.538294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112880 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.538364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112888 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.538435] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112896 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.538505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112904 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.538575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112912 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.538650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112920 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.538720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112928 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.538790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112936 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.538861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112944 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.538942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.538958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.538975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112952 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.538993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.539012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.539028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.539045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112960 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.539063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.539082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.539098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.539115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112200 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.539134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.539157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.184 [2024-07-13 13:44:51.539173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.184 [2024-07-13 13:44:51.539190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112208 len:8 PRP1 0x0 PRP2 0x0 00:33:28.184 [2024-07-13 13:44:51.539208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.539482] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3180 was disconnected and freed. reset controller. 
00:33:28.184 [2024-07-13 13:44:51.539512] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:28.184 [2024-07-13 13:44:51.539561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.184 [2024-07-13 13:44:51.539587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.539610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.184 [2024-07-13 13:44:51.539630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.539651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.184 [2024-07-13 13:44:51.539670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.539690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.184 [2024-07-13 13:44:51.539709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.184 [2024-07-13 13:44:51.539728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.185 [2024-07-13 13:44:51.539819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:28.185 [2024-07-13 13:44:51.543895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.185 [2024-07-13 13:44:51.670254] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:28.185 [2024-07-13 13:44:56.038158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 
13:44:56.038754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.038965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.038988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.185 [2024-07-13 13:44:56.039657] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.185 [2024-07-13 13:44:56.039678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.039701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.186 [2024-07-13 13:44:56.039722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.039745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.039766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.039789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.039810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.039832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.039852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.039883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.039905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.039928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.039950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.039972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.039993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.040016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.040037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.040060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.040081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.040103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.040125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.040147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.040169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.040196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.040218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.040242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.040263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.040285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.040306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.040329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.040351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.040374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.040395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.040418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.040440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.186 [2024-07-13 13:44:56.040462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.186 [2024-07-13 13:44:56.040484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.040506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.040527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.040550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:28.187 [2024-07-13 13:44:56.040572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.040595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.040615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.040638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.040659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.040682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.040703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.040725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.040750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.040774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.040795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.040818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.040839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.040861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.040890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.040914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.040936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.040959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.040981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 
13:44:56.041025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.187 [2024-07-13 13:44:56.041904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.041959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.187 [2024-07-13 13:44:56.041986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101040 len:8 PRP1 
0x0 PRP2 0x0 00:33:28.187 [2024-07-13 13:44:56.042005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.042031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.187 [2024-07-13 13:44:56.042049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.187 [2024-07-13 13:44:56.042067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101048 len:8 PRP1 0x0 PRP2 0x0 00:33:28.187 [2024-07-13 13:44:56.042086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.042106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.187 [2024-07-13 13:44:56.042122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.187 [2024-07-13 13:44:56.042139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101056 len:8 PRP1 0x0 PRP2 0x0 00:33:28.187 [2024-07-13 13:44:56.042158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.042177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.187 [2024-07-13 13:44:56.042193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.187 [2024-07-13 13:44:56.042210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101064 len:8 PRP1 0x0 PRP2 0x0 00:33:28.187 [2024-07-13 13:44:56.042228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.042247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.187 [2024-07-13 13:44:56.042263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.187 [2024-07-13 13:44:56.042280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101072 len:8 PRP1 0x0 PRP2 0x0 00:33:28.187 [2024-07-13 13:44:56.042298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.042317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.187 [2024-07-13 13:44:56.042333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.187 [2024-07-13 13:44:56.042350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101080 len:8 PRP1 0x0 PRP2 0x0 00:33:28.187 [2024-07-13 13:44:56.042368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.042387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.187 [2024-07-13 13:44:56.042403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.187 [2024-07-13 13:44:56.042420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101088 len:8 PRP1 0x0 PRP2 0x0 00:33:28.187 [2024-07-13 
13:44:56.042438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.187 [2024-07-13 13:44:56.042457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.042473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.042490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101096 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.042513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.042533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.042549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.042567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101104 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.042585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.042604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.042621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.042638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101112 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.042656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.042675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.042691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.042708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101120 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.042727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.042745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.042761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.042778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101128 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.042797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.042815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.042832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.042849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101136 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.042874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.042896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.042913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.042930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101144 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.042948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.042967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.042983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.043000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101152 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.043019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.043037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.043054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.043075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101160 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.043094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.043113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.043129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.043146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101168 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.043164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.043182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.043199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.043215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101176 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.043233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.043251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.043267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.043284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101184 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.043302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.043321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.043337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.043353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101192 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.043372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.043390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.043407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.043424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101200 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.043442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.043460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.043476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.043493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101208 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.043511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.043529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.043545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.043562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101216 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.043580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.043606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.043628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.043686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101224 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.043706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.043726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.043743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.043760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101232 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.043778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:28.188 [2024-07-13 13:44:56.043797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:28.188 [2024-07-13 13:44:56.043813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:28.188 [2024-07-13 13:44:56.043830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101240 len:8 PRP1 0x0 PRP2 0x0 00:33:28.188 [2024-07-13 13:44:56.043859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.188 [... the same four-message abort/complete sequence repeats between 13:44:56.043886 and 13:44:56.045469 for the remaining queued WRITE commands (lba:101248 through lba:101400, len:8, qid:1 cid:0) and for two queued READ commands (lba:100640 and lba:100648), all completed manually as ABORTED - SQ DELETION (00/08) ...] 00:33:28.189 [2024-07-13 13:44:56.045743] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3900 was disconnected and freed. reset controller. 
00:33:28.189 [2024-07-13 13:44:56.045772] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:28.189 [2024-07-13 13:44:56.045823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.189 [2024-07-13 13:44:56.045850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.189 [2024-07-13 13:44:56.045880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.189 [2024-07-13 13:44:56.045903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.189 [2024-07-13 13:44:56.045923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.189 [2024-07-13 13:44:56.045943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.189 [2024-07-13 13:44:56.045963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:28.189 [2024-07-13 13:44:56.045983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.189 [2024-07-13 13:44:56.046001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.189 [2024-07-13 13:44:56.046079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:28.189 [2024-07-13 13:44:56.050124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.189 [2024-07-13 13:44:56.096534] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
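The sequence above is one complete failover cycle: the I/O qpair is disconnected and freed, bdev_nvme fails the path over from 10.0.0.2:4422 back to 10.0.0.2:4420, and the controller reset completes. A minimal, purely illustrative way to pull those transitions out of the captured output (try.txt is the capture file the test cats further below; this exact grep is an editorial sketch, not a command the harness runs):
  grep -E 'Start failover from|Resetting controller successful' try.txt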
00:33:28.189 00:33:28.189 Latency(us) 00:33:28.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.189 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:28.189 Verification LBA range: start 0x0 length 0x4000 00:33:28.189 NVMe0n1 : 15.02 6096.01 23.81 452.05 0.00 19513.05 1104.40 22816.24 00:33:28.189 =================================================================================================================== 00:33:28.189 Total : 6096.01 23.81 452.05 0.00 19513.05 1104.40 22816.24 00:33:28.189 Received shutdown signal, test time was about 15.000000 seconds 00:33:28.189 00:33:28.189 Latency(us) 00:33:28.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.189 =================================================================================================================== 00:33:28.189 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:28.189 13:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:28.189 13:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:28.189 13:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:28.189 13:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=415128 00:33:28.189 13:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:28.189 13:45:02 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 415128 /var/tmp/bdevperf.sock 00:33:28.189 13:45:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 415128 ']' 00:33:28.189 13:45:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:28.189 13:45:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:28.189 13:45:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:28.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
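The MiB/s figure in the NVMe0n1 verification row above follows directly from the reported IOPS and the 4096-byte I/O size; a quick check (an illustrative awk one-liner, not part of the harness) reproduces it:
  awk 'BEGIN { printf "%.2f MiB/s\n", 6096.01 * 4096 / (1024 * 1024) }'   # prints 23.81, matching 6096.01 IOPS at 4 KiB per I/O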
00:33:28.189 13:45:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:28.189 13:45:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:29.122 13:45:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:29.122 13:45:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:29.122 13:45:03 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:29.380 [2024-07-13 13:45:04.057073] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:29.380 13:45:04 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:29.637 [2024-07-13 13:45:04.329952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:29.638 13:45:04 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:30.203 NVMe0n1 00:33:30.203 13:45:04 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:30.461 00:33:30.461 13:45:05 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:31.027 00:33:31.027 13:45:05 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:31.027 13:45:05 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:31.027 13:45:05 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:31.285 13:45:05 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:34.599 13:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:34.599 13:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:34.599 13:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=416427 00:33:34.599 13:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:34.599 13:45:09 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 416427 00:33:35.974 0 00:33:35.974 13:45:10 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:35.974 [2024-07-13 13:45:02.897478] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:33:35.975 [2024-07-13 13:45:02.897642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415128 ] 00:33:35.975 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.975 [2024-07-13 13:45:03.032577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.975 [2024-07-13 13:45:03.266615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:35.975 [2024-07-13 13:45:05.980195] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:35.975 [2024-07-13 13:45:05.980320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:35.975 [2024-07-13 13:45:05.980368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:35.975 [2024-07-13 13:45:05.980398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:35.975 [2024-07-13 13:45:05.980420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:35.975 [2024-07-13 13:45:05.980443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:35.975 [2024-07-13 13:45:05.980464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:35.975 [2024-07-13 13:45:05.980486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:35.975 [2024-07-13 13:45:05.980506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:35.975 [2024-07-13 13:45:05.980536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:35.975 [2024-07-13 13:45:05.980627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:35.975 [2024-07-13 13:45:05.980696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:35.975 [2024-07-13 13:45:06.030412] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:35.975 Running I/O for 1 seconds... 
00:33:35.975 00:33:35.975 Latency(us) 00:33:35.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.975 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:35.975 Verification LBA range: start 0x0 length 0x4000 00:33:35.975 NVMe0n1 : 1.02 6070.70 23.71 0.00 0.00 20990.60 4199.16 18544.26 00:33:35.975 =================================================================================================================== 00:33:35.975 Total : 6070.70 23.71 0.00 0.00 20990.60 4199.16 18544.26 00:33:35.975 13:45:10 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:35.975 13:45:10 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:35.975 13:45:10 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:36.233 13:45:10 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:36.233 13:45:10 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:36.491 13:45:11 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:36.749 13:45:11 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:40.025 13:45:14 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:40.025 13:45:14 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:40.025 13:45:14 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 415128 00:33:40.025 13:45:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 415128 ']' 00:33:40.025 13:45:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 415128 00:33:40.025 13:45:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:33:40.025 13:45:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:40.025 13:45:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 415128 00:33:40.025 13:45:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:40.025 13:45:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:40.025 13:45:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 415128' 00:33:40.025 killing process with pid 415128 00:33:40.025 13:45:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 415128 00:33:40.025 13:45:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 415128 00:33:40.957 13:45:15 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:40.957 13:45:15 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:41.214 13:45:15 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:41.214 13:45:15 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:41.214 13:45:15 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:41.214 13:45:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:41.214 13:45:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:33:41.214 13:45:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:41.214 13:45:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:33:41.214 13:45:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:41.214 13:45:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:41.214 rmmod nvme_tcp 00:33:41.214 rmmod nvme_fabrics 00:33:41.472 rmmod nvme_keyring 00:33:41.472 13:45:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:41.472 13:45:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:33:41.472 13:45:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:33:41.472 13:45:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 412608 ']' 00:33:41.472 13:45:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 412608 00:33:41.472 13:45:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 412608 ']' 00:33:41.472 13:45:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 412608 00:33:41.472 13:45:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:33:41.472 13:45:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:41.472 13:45:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 412608 00:33:41.472 13:45:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:41.472 13:45:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:41.472 13:45:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 412608' 00:33:41.472 killing process with pid 412608 00:33:41.472 13:45:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 412608 00:33:41.472 13:45:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 412608 00:33:42.848 13:45:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:42.848 13:45:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:42.848 13:45:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:42.848 13:45:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:42.848 13:45:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:42.848 13:45:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.848 13:45:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:42.848 13:45:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.751 13:45:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:44.751 00:33:44.751 real 0m39.720s 00:33:44.751 user 2m18.566s 00:33:44.751 sys 0m6.100s 00:33:44.751 13:45:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:44.751 13:45:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:44.751 
************************************ 00:33:44.751 END TEST nvmf_failover 00:33:44.751 ************************************ 00:33:45.009 13:45:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:45.009 13:45:19 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:45.009 13:45:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:45.009 13:45:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:45.010 13:45:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.010 ************************************ 00:33:45.010 START TEST nvmf_host_discovery 00:33:45.010 ************************************ 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:45.010 * Looking for test storage... 00:33:45.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:45.010 13:45:19 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:33:45.010 13:45:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.913 13:45:21 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:46.913 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:46.913 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:46.913 13:45:21 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:46.913 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:46.913 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.913 13:45:21 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:46.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:33:46.913 00:33:46.913 --- 10.0.0.2 ping statistics --- 00:33:46.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.913 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:46.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:33:46.913 00:33:46.913 --- 10.0.0.1 ping statistics --- 00:33:46.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.913 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=419279 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 419279 00:33:46.913 13:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 419279 ']' 00:33:46.914 13:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.914 13:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:46.914 13:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.914 13:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:46.914 13:45:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.173 [2024-07-13 13:45:21.696217] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:47.173 [2024-07-13 13:45:21.696364] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.173 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.173 [2024-07-13 13:45:21.830381] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.431 [2024-07-13 13:45:22.053024] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.431 [2024-07-13 13:45:22.053094] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.431 [2024-07-13 13:45:22.053133] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.431 [2024-07-13 13:45:22.053154] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.431 [2024-07-13 13:45:22.053186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:47.431 [2024-07-13 13:45:22.053229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.999 [2024-07-13 13:45:22.673506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.999 [2024-07-13 13:45:22.681712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.999 null0 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.999 null1 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.999 13:45:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:48.000 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.000 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.000 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.000 13:45:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=419436 00:33:48.000 13:45:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:48.000 13:45:22 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 419436 /tmp/host.sock 00:33:48.000 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 419436 ']' 00:33:48.000 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:33:48.000 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:48.000 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:48.000 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:48.000 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:48.000 13:45:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.257 [2024-07-13 13:45:22.803942] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:48.257 [2024-07-13 13:45:22.804098] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419436 ] 00:33:48.257 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.257 [2024-07-13 13:45:22.949321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.515 [2024-07-13 13:45:23.202092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.081 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:49.082 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.341 [2024-07-13 13:45:23.965402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:49.341 13:45:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.341 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:33:49.599 13:45:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:50.164 [2024-07-13 13:45:24.766056] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:50.164 [2024-07-13 13:45:24.766093] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:50.164 [2024-07-13 13:45:24.766149] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:50.164 [2024-07-13 13:45:24.852509] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:50.422 [2024-07-13 13:45:24.956584] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:50.422 [2024-07-13 13:45:24.956632] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:50.422 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.422 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:50.422 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:50.422 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:50.422 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.422 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.422 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:50.422 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:50.422 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:50.422 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
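The empty-string and "nvme0" comparisons in this run come from two small polling helpers. A sketch of that pattern follows, assuming the same /tmp/host.sock RPC socket; the retry count and one-second sleep match the max=10 / sleep 1 steps in the trace, though the helper bodies in host/discovery.sh may differ in detail.

    get_subsystem_names() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    wait_for() {                       # retry a shell condition up to 10 times, 1s apart
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }
    # After the 4420 listener is added, the discovery poller attaches nvme0 and
    # its namespace shows up on the host as bdev nvme0n1:
    wait_for '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    wait_for '[[ "$(get_bdev_list)" == "nvme0n1" ]]'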
00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:50.681 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.682 [2024-07-13 13:45:25.414880] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:50.682 [2024-07-13 13:45:25.415705] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:50.682 [2024-07-13 13:45:25.415771] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:50.682 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.940 13:45:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:50.940 13:45:25 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:50.940 [2024-07-13 13:45:25.544891] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:50.940 [2024-07-13 13:45:25.608703] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:50.940 [2024-07-13 13:45:25.608740] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:50.940 [2024-07-13 13:45:25.608759] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:51.903 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.162 [2024-07-13 13:45:26.631207] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:52.162 [2024-07-13 13:45:26.631281] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:52.162 [2024-07-13 13:45:26.637419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.162 [2024-07-13 13:45:26.637476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.162 [2024-07-13 13:45:26.637504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.162 [2024-07-13 13:45:26.637541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.162 [2024-07-13 13:45:26.637579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:52.162 [2024-07-13 13:45:26.637601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.162 [2024-07-13 13:45:26.637625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:33:52.162 [2024-07-13 13:45:26.637644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:52.162 [2024-07-13 13:45:26.637663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:52.162 [2024-07-13 13:45:26.647405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.162 [2024-07-13 13:45:26.657453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:52.162 [2024-07-13 13:45:26.657763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.162 [2024-07-13 13:45:26.657808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:52.162 [2024-07-13 13:45:26.657835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:52.162 [2024-07-13 13:45:26.657892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:52.162 [2024-07-13 13:45:26.657943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:52.162 [2024-07-13 13:45:26.657967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:52.162 [2024-07-13 13:45:26.657989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:52.162 [2024-07-13 13:45:26.658027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
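The connect() errno 111 and "Bad file descriptor" records above are the expected fallout of the listener removal issued at host/discovery.sh@127: once 10.0.0.2:4420 is gone, the host's reconnect attempts on that path keep failing while the discovery poller retains the 4421 path. A sketch of the same step and the follow-up check, under the same assumed environment as the earlier sketches:

    # Target side: drop the first data port.
    ./scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Host side: only the 4421 path should remain for controller nvme0.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs     # expect: 4421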
00:33:52.162 [2024-07-13 13:45:26.667572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:52.162 [2024-07-13 13:45:26.667846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.162 [2024-07-13 13:45:26.667909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:52.162 [2024-07-13 13:45:26.667935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:52.162 [2024-07-13 13:45:26.667968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:52.162 [2024-07-13 13:45:26.667998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:52.162 [2024-07-13 13:45:26.668034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:52.162 [2024-07-13 13:45:26.668052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:52.162 [2024-07-13 13:45:26.668080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:52.162 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:52.162 [2024-07-13 13:45:26.677682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:52.162 [2024-07-13 13:45:26.678021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.163 [2024-07-13 13:45:26.678060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:52.163 [2024-07-13 13:45:26.678084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:52.163 [2024-07-13 13:45:26.678116] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:52.163 [2024-07-13 13:45:26.678157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:52.163 [2024-07-13 13:45:26.678178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:52.163 [2024-07-13 13:45:26.678197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:52.163 [2024-07-13 13:45:26.678241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.163 [2024-07-13 13:45:26.687796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:52.163 [2024-07-13 13:45:26.688077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.163 [2024-07-13 13:45:26.688115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:52.163 [2024-07-13 13:45:26.688138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:52.163 [2024-07-13 13:45:26.688189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:52.163 [2024-07-13 13:45:26.688245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:52.163 [2024-07-13 13:45:26.688273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:52.163 [2024-07-13 13:45:26.688294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:52.163 [2024-07-13 13:45:26.688325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.163 [2024-07-13 13:45:26.697925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:52.163 [2024-07-13 13:45:26.698144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.163 [2024-07-13 13:45:26.698200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:52.163 [2024-07-13 13:45:26.698227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:52.163 [2024-07-13 13:45:26.698262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:52.163 [2024-07-13 13:45:26.698316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:52.163 [2024-07-13 13:45:26.698345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:52.163 [2024-07-13 13:45:26.698366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:52.163 [2024-07-13 13:45:26.698396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
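The notification checks used throughout this run (and again just below) call notify_get_notifications with an offset and count the result with jq '. | length', keeping a running offset so each check only counts events raised since the previous one; that is how notify_id advances 0 → 1 → 2 → 4 across the run. A sketch of that bookkeeping, assuming the same host socket and acknowledging that the exact helper in host/discovery.sh may differ:

    notify_id=0
    get_notification_count() {
        notification_count=$(./scripts/rpc.py -s /tmp/host.sock \
            notify_get_notifications -i $notify_id | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
    get_notification_count     # e.g. reports 1 right after null0 surfaces as nvme0n1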
00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.163 [2024-07-13 13:45:26.708031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:52.163 [2024-07-13 13:45:26.708382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.163 [2024-07-13 13:45:26.708422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:52.163 [2024-07-13 13:45:26.708447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:52.163 [2024-07-13 13:45:26.708481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:52.163 [2024-07-13 13:45:26.708533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:52.163 [2024-07-13 13:45:26.708561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:52.163 [2024-07-13 13:45:26.708582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:52.163 [2024-07-13 13:45:26.708613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:52.163 [2024-07-13 13:45:26.717598] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:52.163 [2024-07-13 13:45:26.717645] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:52.163 
13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:52.163 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:52.164 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:52.164 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:52.164 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:52.164 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:52.164 13:45:26 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:52.164 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.164 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.421 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.421 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:52.421 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:52.421 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:52.421 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:52.421 13:45:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:52.421 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.421 13:45:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.354 [2024-07-13 13:45:27.953224] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:53.354 [2024-07-13 13:45:27.953276] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:53.354 [2024-07-13 13:45:27.953323] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:53.354 [2024-07-13 13:45:28.079770] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:53.919 [2024-07-13 13:45:28.393167] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:53.919 [2024-07-13 13:45:28.393280] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.919 
13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.919 request: 00:33:53.919 { 00:33:53.919 "name": "nvme", 00:33:53.919 "trtype": "tcp", 00:33:53.919 "traddr": "10.0.0.2", 00:33:53.919 "adrfam": "ipv4", 00:33:53.919 "trsvcid": "8009", 00:33:53.919 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:53.919 "wait_for_attach": true, 00:33:53.919 "method": "bdev_nvme_start_discovery", 00:33:53.919 "req_id": 1 00:33:53.919 } 00:33:53.919 Got JSON-RPC error response 00:33:53.919 response: 00:33:53.919 { 00:33:53.919 "code": -17, 00:33:53.919 "message": "File exists" 00:33:53.919 } 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:53.919 13:45:28 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.919 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.919 request: 00:33:53.919 { 00:33:53.919 "name": "nvme_second", 00:33:53.919 "trtype": "tcp", 00:33:53.919 "traddr": "10.0.0.2", 00:33:53.919 "adrfam": "ipv4", 00:33:53.919 "trsvcid": "8009", 00:33:53.919 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:53.919 "wait_for_attach": true, 00:33:53.919 "method": "bdev_nvme_start_discovery", 00:33:53.919 "req_id": 1 00:33:53.919 } 00:33:53.919 Got JSON-RPC error response 00:33:53.919 response: 00:33:53.919 { 00:33:53.919 "code": -17, 00:33:53.919 "message": "File exists" 00:33:53.920 } 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.920 13:45:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.851 [2024-07-13 13:45:29.597256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.852 [2024-07-13 13:45:29.597342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=8010 00:33:55.110 [2024-07-13 13:45:29.597427] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:55.110 [2024-07-13 13:45:29.597453] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:55.110 [2024-07-13 13:45:29.597475] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:56.043 [2024-07-13 13:45:30.599682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.043 [2024-07-13 13:45:30.599767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3680 with addr=10.0.0.2, port=8010 00:33:56.043 [2024-07-13 13:45:30.599857] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:56.043 [2024-07-13 13:45:30.599895] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:56.043 [2024-07-13 13:45:30.599931] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:56.978 [2024-07-13 13:45:31.601641] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:56.978 request: 00:33:56.978 { 00:33:56.978 "name": "nvme_second", 00:33:56.978 "trtype": "tcp", 00:33:56.978 "traddr": "10.0.0.2", 00:33:56.978 "adrfam": "ipv4", 00:33:56.978 "trsvcid": "8010", 00:33:56.978 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:56.978 "wait_for_attach": false, 00:33:56.978 "attach_timeout_ms": 3000, 00:33:56.978 "method": "bdev_nvme_start_discovery", 00:33:56.978 "req_id": 1 00:33:56.978 } 00:33:56.978 Got JSON-RPC error 
response 00:33:56.978 response: 00:33:56.978 { 00:33:56.978 "code": -110, 00:33:56.978 "message": "Connection timed out" 00:33:56.978 } 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 419436 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:56.978 rmmod nvme_tcp 00:33:56.978 rmmod nvme_fabrics 00:33:56.978 rmmod nvme_keyring 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 419279 ']' 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 419279 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 419279 ']' 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 419279 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:33:56.978 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:57.236 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 419279 00:33:57.236 13:45:31 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:57.236 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:57.236 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 419279' 00:33:57.236 killing process with pid 419279 00:33:57.236 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 419279 00:33:57.236 13:45:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 419279 00:33:58.610 13:45:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:58.610 13:45:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:58.610 13:45:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:58.610 13:45:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:58.610 13:45:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:58.610 13:45:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.610 13:45:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:58.610 13:45:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.513 13:45:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:00.513 00:34:00.513 real 0m15.572s 00:34:00.513 user 0m23.265s 00:34:00.513 sys 0m2.984s 00:34:00.513 13:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:00.513 13:45:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:00.513 ************************************ 00:34:00.513 END TEST nvmf_host_discovery 00:34:00.513 ************************************ 00:34:00.513 13:45:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:00.513 13:45:35 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:00.513 13:45:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:00.513 13:45:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:00.513 13:45:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:00.513 ************************************ 00:34:00.513 START TEST nvmf_host_multipath_status 00:34:00.513 ************************************ 00:34:00.513 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:00.513 * Looking for test storage... 
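Condensed for reference before the multipath run continues below: the duplicate-registration and timeout behaviour exercised in the discovery test above can be replayed with the same rpc.py calls the trace shows. This is a sketch, not output captured from this run; it assumes the host application is still listening on /tmp/host.sock and the target still exposes discovery on 10.0.0.2:8009, exactly as configured earlier in this log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# First registration succeeds; -w waits until the discovered subsystem is attached.
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
# Registering a second discovery service against the same address is rejected
# with JSON-RPC error -17 "File exists", as seen in the responses above.
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "rejected: already registered"
# A port with no listener (8010) plus a 3000 ms attach timeout fails with -110
# "Connection timed out" after repeated connect failures (errno 111).
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || echo "timed out"
# The checks the test runs between steps: discovery service names and attached bdevs.
$rpc -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'
$rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'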
00:34:00.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:00.513 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:00.513 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:00.513 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:00.513 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:00.513 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:00.513 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:00.513 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:00.514 13:45:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:34:00.514 13:45:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:03.046 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:03.046 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
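For orientation (not part of the captured output): the two "Found 0000:0a:00.x (0x8086 - 0x159b)" matches above come from comparing each PCI function's vendor/device pair against the e810 list built a few lines earlier (0x8086 with 0x1592 or 0x159b). The same information the script derives can be read straight from sysfs; the bus addresses below are the ones reported for this host, and the ice driver and cvl_0_* names are what the following lines go on to discover.

for pci in 0000:0a:00.0 0000:0a:00.1; do
  vendor=$(cat /sys/bus/pci/devices/$pci/vendor)                         # 0x8086 (Intel)
  device=$(cat /sys/bus/pci/devices/$pci/device)                         # 0x159b (E810)
  driver=$(basename "$(readlink -f /sys/bus/pci/devices/$pci/driver)")   # ice
  netdev=$(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)                # cvl_0_0 / cvl_0_1
  echo "$pci vendor=$vendor device=$device driver=$driver net=$netdev"
done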
00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:03.046 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:03.046 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:03.046 13:45:37 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:03.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:03.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:34:03.046 00:34:03.046 --- 10.0.0.2 ping statistics --- 00:34:03.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.046 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:03.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:03.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:34:03.046 00:34:03.046 --- 10.0.0.1 ping statistics --- 00:34:03.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.046 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:34:03.046 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=422720 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 422720 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 422720 ']' 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:03.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:03.047 13:45:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:03.047 [2024-07-13 13:45:37.420535] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
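The nvmf_tgt instance whose startup banner begins above is launched inside the cvl_0_0_ns_spdk namespace wired up over the preceding lines. As a condensed replay (these commands are lifted from the trace, not new output): the target-side port is moved into its own namespace, both ends get a 10.0.0.x/24 address, TCP port 4420 is opened on the initiator-facing interface, and connectivity is confirmed with a ping in each direction before the target starts.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &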
00:34:03.047 [2024-07-13 13:45:37.420653] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:03.047 EAL: No free 2048 kB hugepages reported on node 1 00:34:03.047 [2024-07-13 13:45:37.551798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:03.047 [2024-07-13 13:45:37.774465] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:03.047 [2024-07-13 13:45:37.774532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:03.047 [2024-07-13 13:45:37.774559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:03.047 [2024-07-13 13:45:37.774575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:03.047 [2024-07-13 13:45:37.774592] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:03.047 [2024-07-13 13:45:37.774703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:03.047 [2024-07-13 13:45:37.774711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:03.981 13:45:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:03.981 13:45:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:34:03.981 13:45:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:03.981 13:45:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:03.981 13:45:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:03.981 13:45:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:03.981 13:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=422720 00:34:03.981 13:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:03.981 [2024-07-13 13:45:38.658292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:03.981 13:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:04.548 Malloc0 00:34:04.548 13:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:04.806 13:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:05.065 13:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:05.323 [2024-07-13 13:45:39.822932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:05.323 13:45:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:05.581 [2024-07-13 13:45:40.075742] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:05.581 13:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=423015 00:34:05.581 13:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:05.581 13:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:05.581 13:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 423015 /var/tmp/bdevperf.sock 00:34:05.581 13:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 423015 ']' 00:34:05.581 13:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:05.581 13:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:05.581 13:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:05.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:05.581 13:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:05.581 13:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:06.512 13:45:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:06.512 13:45:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:34:06.512 13:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:06.769 13:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:07.384 Nvme0n1 00:34:07.384 13:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:07.642 Nvme0n1 00:34:07.642 13:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:07.642 13:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:09.540 13:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:09.541 13:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:09.797 13:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:10.054 13:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:11.429 13:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:11.429 13:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:11.429 13:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.429 13:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:11.429 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.429 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:11.429 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.429 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:11.687 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:11.687 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:11.688 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.688 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:11.946 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.946 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:11.946 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.946 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:12.204 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.204 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:12.204 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.204 13:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:12.462 13:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.462 13:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:12.462 13:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.462 13:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:12.720 13:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.720 13:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:12.720 13:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:12.979 13:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:13.238 13:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:14.173 13:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:14.173 13:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:14.173 13:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.173 13:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:14.430 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:14.430 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:14.430 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.430 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:14.687 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.687 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:14.687 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.687 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:14.945 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.945 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:14.945 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.945 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:15.203 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.203 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:15.203 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.203 13:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:15.460 13:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.460 13:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:15.460 13:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.460 13:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:15.718 13:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.718 13:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:15.718 13:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:15.976 13:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:16.233 13:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:17.167 13:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:17.167 13:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:17.167 13:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.167 13:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:17.425 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.425 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:17.425 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.425 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:17.683 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:17.683 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:17.683 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.683 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:17.941 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.941 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:17.941 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.941 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:18.198 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.198 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:18.198 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.198 13:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:18.455 13:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.455 13:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:18.455 13:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.455 13:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:18.712 13:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.712 13:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:18.712 13:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:18.970 13:45:53 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:19.228 13:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:20.161 13:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:20.161 13:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:20.161 13:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.161 13:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:20.419 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.419 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:20.419 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.419 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:20.677 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:20.677 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:20.677 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.677 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:20.935 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.935 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:20.935 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.935 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:21.200 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.200 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:21.200 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.200 13:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:21.482 13:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:34:21.482 13:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:21.482 13:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.482 13:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:21.740 13:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:21.740 13:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:21.740 13:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:21.997 13:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:22.254 13:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:23.187 13:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:23.187 13:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:23.187 13:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.187 13:45:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:23.444 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:23.444 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:23.444 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.444 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:23.702 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:23.702 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:23.702 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.702 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:23.961 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.961 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:34:23.961 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.961 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:24.219 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.219 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:24.219 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.219 13:45:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:24.477 13:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:24.477 13:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:24.477 13:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.477 13:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:24.735 13:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:24.735 13:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:24.735 13:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:24.993 13:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:25.251 13:45:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:26.184 13:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:26.184 13:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:26.184 13:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.184 13:46:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:26.443 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:26.443 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:26.443 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.443 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:26.701 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.701 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:26.701 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.701 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:26.959 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.959 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:26.959 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.959 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:27.216 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.216 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:27.216 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.216 13:46:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:27.474 13:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:27.474 13:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:27.474 13:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.474 13:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:27.732 13:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.732 13:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:27.990 13:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:27.990 13:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:34:28.253 13:46:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:28.511 13:46:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:29.444 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:29.444 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:29.444 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.444 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:29.701 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.701 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:29.701 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.701 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:29.958 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.958 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:29.958 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.958 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:30.215 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.215 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:30.215 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.215 13:46:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:30.473 13:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.473 13:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:30.473 13:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.473 13:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:30.730 13:46:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.730 13:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:30.730 13:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.730 13:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:30.988 13:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.988 13:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:30.988 13:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:31.246 13:46:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:31.504 13:46:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:32.437 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:32.437 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:32.437 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.437 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:32.695 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:32.695 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:32.695 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.695 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:32.953 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.953 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:32.953 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.953 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:33.211 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.211 13:46:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:33.211 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.211 13:46:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:33.470 13:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.470 13:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:33.470 13:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.470 13:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:33.728 13:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.728 13:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:33.728 13:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.728 13:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:33.987 13:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.987 13:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:33.987 13:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:34.245 13:46:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:34.533 13:46:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:35.473 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:35.473 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:35.473 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.473 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:35.731 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.731 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:35.731 13:46:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.731 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:35.988 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.988 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:35.988 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.988 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:36.246 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.246 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:36.246 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.246 13:46:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:36.503 13:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.503 13:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:36.503 13:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.503 13:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:36.761 13:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.761 13:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:36.761 13:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.761 13:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:37.019 13:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.019 13:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:37.020 13:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:37.277 13:46:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:37.535 13:46:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:38.469 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:38.469 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:38.469 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.469 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:38.726 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.726 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:38.726 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.726 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:38.983 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:38.983 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:38.983 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.983 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:39.240 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.240 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:39.240 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.240 13:46:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:39.497 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.497 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:39.497 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.497 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:39.754 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.754 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:39.754 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.754 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:40.012 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:40.012 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 423015 00:34:40.012 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 423015 ']' 00:34:40.012 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 423015 00:34:40.012 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:34:40.012 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:40.012 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 423015 00:34:40.012 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:34:40.012 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:34:40.012 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 423015' 00:34:40.012 killing process with pid 423015 00:34:40.012 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 423015 00:34:40.012 13:46:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 423015 00:34:40.577 Connection closed with partial response: 00:34:40.577 00:34:40.577 00:34:41.145 13:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 423015 00:34:41.145 13:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:41.145 [2024-07-13 13:45:40.171654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:41.145 [2024-07-13 13:45:40.171813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423015 ] 00:34:41.145 EAL: No free 2048 kB hugepages reported on node 1 00:34:41.145 [2024-07-13 13:45:40.297514] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.145 [2024-07-13 13:45:40.526543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:41.145 Running I/O for 90 seconds... 
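The bdevperf log dumped from try.txt below covers the 90-second I/O run that stays in flight while the ANA states are flipped above; whenever a listener is switched to inaccessible, outstanding WRITEs on that path complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02, the path-related ANA Inaccessible status) and the host is expected to retry them on the remaining accessible path. Every port_status check traced above uses the same query: ask the bdevperf RPC socket for the subsystem's I/O paths and filter the JSON by listener port with jq. A minimal sketch of that pattern, reusing the rpc.py invocation and jq filter exactly as they appear in the trace (the helper name mirrors host/multipath_status.sh):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    port_status() {
        local port=$1 field=$2 expected=$3
        # bdev_nvme_get_io_paths reports current/connected/accessible per path
        local actual
        actual=$($RPC -s $SOCK bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ $actual == "$expected" ]]
    }

    # e.g. only the 4420 listener should currently be the active path
    port_status 4420 current true && port_status 4421 current false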
00:34:41.145 [2024-07-13 13:45:56.612333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.145 [2024-07-13 13:45:56.612456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:41.145 [2024-07-13 13:45:56.612563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.145 [2024-07-13 13:45:56.612593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:41.145 [2024-07-13 13:45:56.612653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.145 [2024-07-13 13:45:56.612679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:41.145 [2024-07-13 13:45:56.612715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.145 [2024-07-13 13:45:56.612740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:41.145 [2024-07-13 13:45:56.612776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.145 [2024-07-13 13:45:56.612801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:41.145 [2024-07-13 13:45:56.612836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.145 [2024-07-13 13:45:56.612886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:41.145 [2024-07-13 13:45:56.612926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.145 [2024-07-13 13:45:56.612952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.612989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.613014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.613050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.613076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.614703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.614742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.614793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.614831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.614883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.614910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.614950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.614977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.615957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.615984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.616023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.616048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.616088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:41.146 [2024-07-13 13:45:56.616114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.616153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.616193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.616231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.616259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.616298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.616324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.616361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.616386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.616423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.616449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.616492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.616518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.616556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.616581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.616618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.616644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.616682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.616706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.616743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.616767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.616805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.616831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.617011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.617042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.617090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.617117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.617160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.617187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.617229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.617254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.617296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.617338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.617381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.617406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.617447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.146 [2024-07-13 13:45:56.617478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:41.146 [2024-07-13 13:45:56.617520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.617546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.617586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.617611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.617652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.617677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.617719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.617744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.617786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.617814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.617883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.617913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.617956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.617983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.618025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.618095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.618180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.618247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:34:41.147 [2024-07-13 13:45:56.618314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.618387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.618452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.618518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.618583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.618650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.618715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.618796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.618886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.618958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.618984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.619051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.619117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.619199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.147 [2024-07-13 13:45:56.619271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.147 [2024-07-13 13:45:56.619336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.619403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.619471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.619537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.619602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.619666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.619732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.619796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.619886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.619957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.619998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.620025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.620075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.620102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.620143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.620183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.620223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.620249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.620289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.620315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.620355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:41.147 [2024-07-13 13:45:56.620380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:41.147 [2024-07-13 13:45:56.620420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.147 [2024-07-13 13:45:56.620446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.620622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:45:56.620652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.620701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:45:56.620728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.620773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:45:56.620800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.620844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:45:56.620892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.620942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:45:56.620970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.621014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:45:56.621042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.621087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:45:56.621118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.621165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:45:56.621207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.621253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:45:56.621279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.621323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:45:56.621349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.621393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:45:56.621419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.621463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:45:56.621488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.621531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:45:56.621557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:45:56.621601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:45:56.621628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.180213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.180320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.180425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.180456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.180503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:46:12.180546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.180585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:46:12.180610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.180647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:46:12.180680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.180717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:46:12.180742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.180778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:46:12.180802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.180838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.180863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.180929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.180955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.180991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.181033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.181072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.181097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.181133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.181158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.182003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.182038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:46:12.183117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:34:41.148 [2024-07-13 13:46:12.183157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.183199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.183260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.183319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.183386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.183446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.183504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.183563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.183621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.183680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.148 [2024-07-13 13:46:12.183739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:46:12.183797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:46:12.183858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:46:12.183947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:41.148 [2024-07-13 13:46:12.183983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.148 [2024-07-13 13:46:12.184009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.184938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.184964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.185001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:41.149 [2024-07-13 13:46:12.185026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.185062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.149 [2024-07-13 13:46:12.185087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.185123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.149 [2024-07-13 13:46:12.185148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.185198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.149 [2024-07-13 13:46:12.185224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.185260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.149 [2024-07-13 13:46:12.185284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.185320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.149 [2024-07-13 13:46:12.185344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.185378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.149 [2024-07-13 13:46:12.185404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.185438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.185462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.185497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.185521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.185556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.185582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.186492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.186531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.186578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.186606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.186658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.186685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.186721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.186745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.186780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.149 [2024-07-13 13:46:12.186806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.186842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.149 [2024-07-13 13:46:12.186888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.186930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.149 [2024-07-13 13:46:12.186956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.186993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.149 [2024-07-13 13:46:12.187018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:41.149 [2024-07-13 13:46:12.187056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.149 [2024-07-13 13:46:12.187081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:41.150 Received shutdown signal, test time was about 32.308881 seconds 00:34:41.150 00:34:41.150 Latency(us) 00:34:41.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.150 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:41.150 Verification LBA range: start 0x0 length 0x4000 00:34:41.150 Nvme0n1 : 32.31 5764.58 22.52 0.00 0.00 22167.09 301.89 4026531.84 
00:34:41.150 =================================================================================================================== 00:34:41.150 Total : 5764.58 22.52 0.00 0.00 22167.09 301.89 4026531.84 00:34:41.150 13:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:41.408 13:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:41.408 13:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:41.408 13:46:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:41.408 13:46:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:41.408 13:46:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:34:41.408 13:46:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:41.408 13:46:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:34:41.408 13:46:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:41.408 13:46:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:41.408 rmmod nvme_tcp 00:34:41.408 rmmod nvme_fabrics 00:34:41.408 rmmod nvme_keyring 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 422720 ']' 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 422720 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 422720 ']' 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 422720 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 422720 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 422720' 00:34:41.408 killing process with pid 422720 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 422720 00:34:41.408 13:46:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 422720 00:34:42.782 13:46:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:42.782 13:46:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:42.782 13:46:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:42.782 13:46:17 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:42.782 13:46:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:42.782 13:46:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.782 13:46:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:42.782 13:46:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.316 13:46:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:45.316 00:34:45.316 real 0m44.382s 00:34:45.316 user 2m4.952s 00:34:45.316 sys 0m12.922s 00:34:45.316 13:46:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:45.316 13:46:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:45.316 ************************************ 00:34:45.316 END TEST nvmf_host_multipath_status 00:34:45.316 ************************************ 00:34:45.316 13:46:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:45.316 13:46:19 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:45.316 13:46:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:45.316 13:46:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:45.316 13:46:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.316 ************************************ 00:34:45.316 START TEST nvmf_discovery_remove_ifc 00:34:45.316 ************************************ 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:45.316 * Looking for test storage... 
00:34:45.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:34:45.316 13:46:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:47.218 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:47.218 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:47.218 13:46:21 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:47.218 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:47.219 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:47.219 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:47.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:47.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:34:47.219 00:34:47.219 --- 10.0.0.2 ping statistics --- 00:34:47.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.219 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:47.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:47.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:34:47.219 00:34:47.219 --- 10.0.0.1 ping statistics --- 00:34:47.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.219 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=429478 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 429478 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 429478 ']' 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:47.219 13:46:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:47.219 [2024-07-13 13:46:21.689816] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
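For reference, the nvmf_tcp_init sequence traced above boils down to the following commands (condensed directly from the trace; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this host and run):

  # Put the target-side port in its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP traffic in and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp

The target application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -m 0x2), which is why its listeners below report 10.0.0.2.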
00:34:47.219 [2024-07-13 13:46:21.689973] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.219 EAL: No free 2048 kB hugepages reported on node 1 00:34:47.219 [2024-07-13 13:46:21.835712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.478 [2024-07-13 13:46:22.088782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:47.478 [2024-07-13 13:46:22.088863] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:47.478 [2024-07-13 13:46:22.088914] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:47.478 [2024-07-13 13:46:22.088941] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:47.478 [2024-07-13 13:46:22.088963] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:47.478 [2024-07-13 13:46:22.089012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:48.045 [2024-07-13 13:46:22.627448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.045 [2024-07-13 13:46:22.635639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:48.045 null0 00:34:48.045 [2024-07-13 13:46:22.667557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=429632 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 429632 /tmp/host.sock 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 429632 ']' 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:48.045 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:48.045 13:46:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:48.045 [2024-07-13 13:46:22.771148] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:48.045 [2024-07-13 13:46:22.771301] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429632 ] 00:34:48.302 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.302 [2024-07-13 13:46:22.901731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:48.559 [2024-07-13 13:46:23.154138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.123 13:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:49.123 13:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:34:49.123 13:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:49.123 13:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:49.123 13:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.123 13:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:49.123 13:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.123 13:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:49.123 13:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.123 13:46:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:49.386 13:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.386 13:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:49.386 13:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.386 13:46:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.388 [2024-07-13 13:46:25.070097] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:50.388 [2024-07-13 13:46:25.070135] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:50.388 [2024-07-13 13:46:25.070196] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:50.646 [2024-07-13 13:46:25.156509] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:50.646 [2024-07-13 13:46:25.221102] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:50.646 [2024-07-13 13:46:25.221205] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:50.646 [2024-07-13 13:46:25.221301] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:50.646 [2024-07-13 13:46:25.221349] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:50.646 [2024-07-13 13:46:25.221408] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:50.646 [2024-07-13 13:46:25.228078] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2780 was disconnected and freed. delete nvme_qpair. 
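The host side of the test is a second nvmf_tgt driven over /tmp/host.sock; the rpc_cmd calls traced above correspond roughly to the sequence below (rpc_cmd forwards to scripts/rpc.py, so the equivalent direct invocations are shown; the binary path is abbreviated):

  # Start the host application, then finish its framework init over the RPC socket
  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  # Attach to the target's discovery service and wait until the namespace shows up as a bdev
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach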
00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:50.646 13:46:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:52.016 13:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:52.016 13:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:52.016 13:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:52.016 13:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.016 13:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:52.016 13:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:52.016 13:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:52.016 13:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.016 13:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:52.016 13:46:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:52.949 13:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:52.949 13:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:52.949 13:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:52.949 13:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.949 13:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:52.949 13:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:34:52.949 13:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:52.949 13:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.949 13:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:52.949 13:46:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:53.882 13:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:53.882 13:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:53.882 13:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:53.882 13:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.882 13:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:53.882 13:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:53.882 13:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:53.882 13:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.883 13:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:53.883 13:46:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:54.817 13:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:54.817 13:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:54.817 13:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:54.817 13:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.817 13:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.817 13:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:54.817 13:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:54.817 13:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.817 13:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:54.817 13:46:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:56.192 13:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:56.192 13:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:56.192 13:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:56.192 13:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.192 13:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:56.192 13:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:56.192 13:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:56.192 13:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
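The repeated bdev_get_bdevs | jq | sort | xargs runs above are the test's one-second polling loop. A minimal re-creation of the two helpers, inferred from the commands visible in the trace (the real script may additionally bound the number of retries):

  get_bdev_list() {
    # Flatten the bdev names into a single sorted, space-separated string
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
    # Poll until the bdev list matches the expected value ('' means "no bdevs left")
    while [[ "$(get_bdev_list)" != "$1" ]]; do
      sleep 1
    done
  }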
00:34:56.192 13:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:56.192 13:46:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:56.192 [2024-07-13 13:46:30.662159] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:56.192 [2024-07-13 13:46:30.662265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.192 [2024-07-13 13:46:30.662298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.192 [2024-07-13 13:46:30.662327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.192 [2024-07-13 13:46:30.662349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.192 [2024-07-13 13:46:30.662372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.192 [2024-07-13 13:46:30.662394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.192 [2024-07-13 13:46:30.662416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.192 [2024-07-13 13:46:30.662438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.192 [2024-07-13 13:46:30.662461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.192 [2024-07-13 13:46:30.662483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.193 [2024-07-13 13:46:30.662505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:34:56.193 [2024-07-13 13:46:30.672169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:34:56.193 [2024-07-13 13:46:30.682237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:57.127 13:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:57.127 13:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:57.127 13:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.127 13:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:57.127 13:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.127 13:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:57.127 13:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:57.127 [2024-07-13 13:46:31.709909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:57.127 
[2024-07-13 13:46:31.709996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:34:57.127 [2024-07-13 13:46:31.710034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:34:57.127 [2024-07-13 13:46:31.710097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:34:57.127 [2024-07-13 13:46:31.710812] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:57.127 [2024-07-13 13:46:31.710862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:57.127 [2024-07-13 13:46:31.710932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:57.127 [2024-07-13 13:46:31.710957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:57.127 [2024-07-13 13:46:31.711005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:57.127 [2024-07-13 13:46:31.711030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:57.127 13:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.127 13:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:57.127 13:46:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:58.060 [2024-07-13 13:46:32.713560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:58.060 [2024-07-13 13:46:32.713604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:58.060 [2024-07-13 13:46:32.713627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:58.060 [2024-07-13 13:46:32.713648] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:34:58.060 [2024-07-13 13:46:32.713683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:58.060 [2024-07-13 13:46:32.713742] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:58.060 [2024-07-13 13:46:32.713806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:58.061 [2024-07-13 13:46:32.713839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.061 [2024-07-13 13:46:32.713876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:58.061 [2024-07-13 13:46:32.713916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.061 [2024-07-13 13:46:32.713938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:58.061 [2024-07-13 13:46:32.713958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.061 [2024-07-13 13:46:32.713978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:58.061 [2024-07-13 13:46:32.713997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.061 [2024-07-13 13:46:32.714017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:58.061 [2024-07-13 13:46:32.714035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:58.061 [2024-07-13 13:46:32.714053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:34:58.061 [2024-07-13 13:46:32.714202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:34:58.061 [2024-07-13 13:46:32.715330] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:58.061 [2024-07-13 13:46:32.715364] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:58.061 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:58.061 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:58.061 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.061 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:58.061 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:58.061 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:58.061 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:58.061 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.061 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:58.061 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:58.061 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:58.318 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:58.318 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:58.318 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:58.318 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:58.318 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.318 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:58.318 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:58.318 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:58.318 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.318 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:58.318 13:46:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:59.252 13:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:59.252 13:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:59.252 13:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:59.252 13:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.252 13:46:33 nvmf_tcp.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@10 -- # set +x 00:34:59.252 13:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:59.252 13:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:59.252 13:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.252 13:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:59.252 13:46:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:00.186 [2024-07-13 13:46:34.726138] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:00.186 [2024-07-13 13:46:34.726202] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:00.186 [2024-07-13 13:46:34.726252] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:00.186 [2024-07-13 13:46:34.852693] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:00.186 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:00.186 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:00.186 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.186 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:00.186 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.186 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:00.186 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:00.186 [2024-07-13 13:46:34.918152] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:00.186 [2024-07-13 13:46:34.918243] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:00.186 [2024-07-13 13:46:34.918338] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:00.186 [2024-07-13 13:46:34.918382] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:00.186 [2024-07-13 13:46:34.918408] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:00.186 [2024-07-13 13:46:34.925021] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2f00 was disconnected and freed. delete nvme_qpair. 
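Condensed from the trace, the fault this test injects and then heals is nothing more than the target-side interface going away and coming back (namespace and interface names as above):

  # Tear down the target's data path; nvme0n1 must disappear once the controller is declared lost
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  wait_for_bdev ''
  # Restore it; discovery re-attaches and the namespace reappears as a new bdev
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1

With --ctrlr-loss-timeout-sec 2 and --reconnect-delay-sec 1, the connect and reset errors logged in between are expected before the old controller is dropped and nvme1n1 attaches.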
00:35:00.186 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.444 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:00.444 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:00.444 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 429632 00:35:00.444 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 429632 ']' 00:35:00.444 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 429632 00:35:00.444 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:00.444 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:00.444 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 429632 00:35:00.444 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:00.444 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:00.444 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 429632' 00:35:00.444 killing process with pid 429632 00:35:00.444 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 429632 00:35:00.445 13:46:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 429632 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:01.377 rmmod nvme_tcp 00:35:01.377 rmmod nvme_fabrics 00:35:01.377 rmmod nvme_keyring 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 429478 ']' 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 429478 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 429478 ']' 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 429478 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 429478 00:35:01.377 
13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 429478' 00:35:01.377 killing process with pid 429478 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 429478 00:35:01.377 13:46:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 429478 00:35:02.752 13:46:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:02.752 13:46:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:02.752 13:46:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:02.752 13:46:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:02.752 13:46:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:02.752 13:46:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.752 13:46:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:02.752 13:46:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.657 13:46:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:04.657 00:35:04.657 real 0m19.763s 00:35:04.657 user 0m28.645s 00:35:04.657 sys 0m3.090s 00:35:04.657 13:46:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:04.657 13:46:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:04.657 ************************************ 00:35:04.657 END TEST nvmf_discovery_remove_ifc 00:35:04.657 ************************************ 00:35:04.657 13:46:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:04.657 13:46:39 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:04.657 13:46:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:04.657 13:46:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:04.657 13:46:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:04.657 ************************************ 00:35:04.657 START TEST nvmf_identify_kernel_target 00:35:04.657 ************************************ 00:35:04.657 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:04.915 * Looking for test storage... 
00:35:04.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:04.915 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:04.916 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:04.916 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:04.916 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:04.916 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:04.916 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:04.916 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:04.916 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.916 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:04.916 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.916 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:04.916 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:04.916 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:35:04.916 13:46:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:06.869 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:06.870 
13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:06.870 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:06.870 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
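The identify_kernel_target run opens with the same gather_supported_nvmf_pci_devs probe as before; the step that maps each E810 PCI function to its kernel netdev is just a sysfs glob (condensed from the trace, loop body approximated):

  for pci in "${pci_devs[@]}"; do                       # here 0000:0a:00.0 and 0000:0a:00.1
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # directories named after the netdevs
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
  done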
00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:06.870 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:06.870 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip 
addr add 10.0.0.1/24 dev cvl_0_1 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:06.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:06.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:35:06.870 00:35:06.870 --- 10.0.0.2 ping statistics --- 00:35:06.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.870 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:06.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:06.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:35:06.870 00:35:06.870 --- 10.0.0.1 ping statistics --- 00:35:06.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.870 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:06.870 13:46:41 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:06.870 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:06.871 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:06.871 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:06.871 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:06.871 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:06.871 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:06.871 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:06.871 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:06.871 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:35:06.871 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:06.871 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:06.871 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:06.871 13:46:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:08.247 Waiting for block devices as requested 00:35:08.247 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:08.247 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:08.247 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:08.511 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:08.511 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:08.511 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:08.511 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:08.769 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:08.769 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:08.769 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:08.769 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:09.027 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:09.027 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:09.027 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:09.285 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:09.285 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:09.285 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:09.544 
13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:09.544 No valid GPT data, bailing 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:09.544 00:35:09.544 Discovery Log Number of Records 2, Generation counter 2 00:35:09.544 =====Discovery Log Entry 0====== 00:35:09.544 trtype: tcp 00:35:09.544 adrfam: ipv4 00:35:09.544 subtype: current discovery subsystem 00:35:09.544 treq: not specified, sq flow control disable supported 00:35:09.544 portid: 1 00:35:09.544 trsvcid: 4420 00:35:09.544 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:09.544 traddr: 10.0.0.1 00:35:09.544 eflags: none 00:35:09.544 sectype: none 00:35:09.544 =====Discovery Log Entry 1====== 00:35:09.544 trtype: tcp 00:35:09.544 adrfam: ipv4 00:35:09.544 subtype: nvme subsystem 00:35:09.544 treq: not 
specified, sq flow control disable supported 00:35:09.544 portid: 1 00:35:09.544 trsvcid: 4420 00:35:09.544 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:09.544 traddr: 10.0.0.1 00:35:09.544 eflags: none 00:35:09.544 sectype: none 00:35:09.544 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:09.544 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:09.544 EAL: No free 2048 kB hugepages reported on node 1 00:35:09.803 ===================================================== 00:35:09.803 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:09.803 ===================================================== 00:35:09.803 Controller Capabilities/Features 00:35:09.803 ================================ 00:35:09.803 Vendor ID: 0000 00:35:09.803 Subsystem Vendor ID: 0000 00:35:09.803 Serial Number: 899e4e71b2f2c07f8aa0 00:35:09.803 Model Number: Linux 00:35:09.803 Firmware Version: 6.7.0-68 00:35:09.803 Recommended Arb Burst: 0 00:35:09.803 IEEE OUI Identifier: 00 00 00 00:35:09.803 Multi-path I/O 00:35:09.803 May have multiple subsystem ports: No 00:35:09.803 May have multiple controllers: No 00:35:09.803 Associated with SR-IOV VF: No 00:35:09.803 Max Data Transfer Size: Unlimited 00:35:09.803 Max Number of Namespaces: 0 00:35:09.803 Max Number of I/O Queues: 1024 00:35:09.803 NVMe Specification Version (VS): 1.3 00:35:09.803 NVMe Specification Version (Identify): 1.3 00:35:09.803 Maximum Queue Entries: 1024 00:35:09.803 Contiguous Queues Required: No 00:35:09.803 Arbitration Mechanisms Supported 00:35:09.803 Weighted Round Robin: Not Supported 00:35:09.803 Vendor Specific: Not Supported 00:35:09.803 Reset Timeout: 7500 ms 00:35:09.803 Doorbell Stride: 4 bytes 00:35:09.803 NVM Subsystem Reset: Not Supported 00:35:09.803 Command Sets Supported 00:35:09.803 NVM Command Set: Supported 00:35:09.803 Boot Partition: Not Supported 00:35:09.803 Memory Page Size Minimum: 4096 bytes 00:35:09.803 Memory Page Size Maximum: 4096 bytes 00:35:09.803 Persistent Memory Region: Not Supported 00:35:09.803 Optional Asynchronous Events Supported 00:35:09.803 Namespace Attribute Notices: Not Supported 00:35:09.803 Firmware Activation Notices: Not Supported 00:35:09.803 ANA Change Notices: Not Supported 00:35:09.804 PLE Aggregate Log Change Notices: Not Supported 00:35:09.804 LBA Status Info Alert Notices: Not Supported 00:35:09.804 EGE Aggregate Log Change Notices: Not Supported 00:35:09.804 Normal NVM Subsystem Shutdown event: Not Supported 00:35:09.804 Zone Descriptor Change Notices: Not Supported 00:35:09.804 Discovery Log Change Notices: Supported 00:35:09.804 Controller Attributes 00:35:09.804 128-bit Host Identifier: Not Supported 00:35:09.804 Non-Operational Permissive Mode: Not Supported 00:35:09.804 NVM Sets: Not Supported 00:35:09.804 Read Recovery Levels: Not Supported 00:35:09.804 Endurance Groups: Not Supported 00:35:09.804 Predictable Latency Mode: Not Supported 00:35:09.804 Traffic Based Keep ALive: Not Supported 00:35:09.804 Namespace Granularity: Not Supported 00:35:09.804 SQ Associations: Not Supported 00:35:09.804 UUID List: Not Supported 00:35:09.804 Multi-Domain Subsystem: Not Supported 00:35:09.804 Fixed Capacity Management: Not Supported 00:35:09.804 Variable Capacity Management: Not Supported 00:35:09.804 Delete Endurance Group: Not Supported 00:35:09.804 Delete NVM Set: Not Supported 00:35:09.804 
Extended LBA Formats Supported: Not Supported 00:35:09.804 Flexible Data Placement Supported: Not Supported 00:35:09.804 00:35:09.804 Controller Memory Buffer Support 00:35:09.804 ================================ 00:35:09.804 Supported: No 00:35:09.804 00:35:09.804 Persistent Memory Region Support 00:35:09.804 ================================ 00:35:09.804 Supported: No 00:35:09.804 00:35:09.804 Admin Command Set Attributes 00:35:09.804 ============================ 00:35:09.804 Security Send/Receive: Not Supported 00:35:09.804 Format NVM: Not Supported 00:35:09.804 Firmware Activate/Download: Not Supported 00:35:09.804 Namespace Management: Not Supported 00:35:09.804 Device Self-Test: Not Supported 00:35:09.804 Directives: Not Supported 00:35:09.804 NVMe-MI: Not Supported 00:35:09.804 Virtualization Management: Not Supported 00:35:09.804 Doorbell Buffer Config: Not Supported 00:35:09.804 Get LBA Status Capability: Not Supported 00:35:09.804 Command & Feature Lockdown Capability: Not Supported 00:35:09.804 Abort Command Limit: 1 00:35:09.804 Async Event Request Limit: 1 00:35:09.804 Number of Firmware Slots: N/A 00:35:09.804 Firmware Slot 1 Read-Only: N/A 00:35:09.804 Firmware Activation Without Reset: N/A 00:35:09.804 Multiple Update Detection Support: N/A 00:35:09.804 Firmware Update Granularity: No Information Provided 00:35:09.804 Per-Namespace SMART Log: No 00:35:09.804 Asymmetric Namespace Access Log Page: Not Supported 00:35:09.804 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:09.804 Command Effects Log Page: Not Supported 00:35:09.804 Get Log Page Extended Data: Supported 00:35:09.804 Telemetry Log Pages: Not Supported 00:35:09.804 Persistent Event Log Pages: Not Supported 00:35:09.804 Supported Log Pages Log Page: May Support 00:35:09.804 Commands Supported & Effects Log Page: Not Supported 00:35:09.804 Feature Identifiers & Effects Log Page:May Support 00:35:09.804 NVMe-MI Commands & Effects Log Page: May Support 00:35:09.804 Data Area 4 for Telemetry Log: Not Supported 00:35:09.804 Error Log Page Entries Supported: 1 00:35:09.804 Keep Alive: Not Supported 00:35:09.804 00:35:09.804 NVM Command Set Attributes 00:35:09.804 ========================== 00:35:09.804 Submission Queue Entry Size 00:35:09.804 Max: 1 00:35:09.804 Min: 1 00:35:09.804 Completion Queue Entry Size 00:35:09.804 Max: 1 00:35:09.804 Min: 1 00:35:09.804 Number of Namespaces: 0 00:35:09.804 Compare Command: Not Supported 00:35:09.804 Write Uncorrectable Command: Not Supported 00:35:09.804 Dataset Management Command: Not Supported 00:35:09.804 Write Zeroes Command: Not Supported 00:35:09.804 Set Features Save Field: Not Supported 00:35:09.804 Reservations: Not Supported 00:35:09.804 Timestamp: Not Supported 00:35:09.804 Copy: Not Supported 00:35:09.804 Volatile Write Cache: Not Present 00:35:09.804 Atomic Write Unit (Normal): 1 00:35:09.804 Atomic Write Unit (PFail): 1 00:35:09.804 Atomic Compare & Write Unit: 1 00:35:09.804 Fused Compare & Write: Not Supported 00:35:09.804 Scatter-Gather List 00:35:09.804 SGL Command Set: Supported 00:35:09.804 SGL Keyed: Not Supported 00:35:09.804 SGL Bit Bucket Descriptor: Not Supported 00:35:09.804 SGL Metadata Pointer: Not Supported 00:35:09.804 Oversized SGL: Not Supported 00:35:09.804 SGL Metadata Address: Not Supported 00:35:09.804 SGL Offset: Supported 00:35:09.804 Transport SGL Data Block: Not Supported 00:35:09.804 Replay Protected Memory Block: Not Supported 00:35:09.804 00:35:09.804 Firmware Slot Information 00:35:09.804 ========================= 00:35:09.804 
Active slot: 0 00:35:09.804 00:35:09.804 00:35:09.804 Error Log 00:35:09.804 ========= 00:35:09.804 00:35:09.804 Active Namespaces 00:35:09.804 ================= 00:35:09.804 Discovery Log Page 00:35:09.804 ================== 00:35:09.804 Generation Counter: 2 00:35:09.804 Number of Records: 2 00:35:09.804 Record Format: 0 00:35:09.804 00:35:09.804 Discovery Log Entry 0 00:35:09.804 ---------------------- 00:35:09.804 Transport Type: 3 (TCP) 00:35:09.804 Address Family: 1 (IPv4) 00:35:09.804 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:09.804 Entry Flags: 00:35:09.804 Duplicate Returned Information: 0 00:35:09.804 Explicit Persistent Connection Support for Discovery: 0 00:35:09.804 Transport Requirements: 00:35:09.804 Secure Channel: Not Specified 00:35:09.804 Port ID: 1 (0x0001) 00:35:09.804 Controller ID: 65535 (0xffff) 00:35:09.804 Admin Max SQ Size: 32 00:35:09.804 Transport Service Identifier: 4420 00:35:09.804 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:09.804 Transport Address: 10.0.0.1 00:35:09.804 Discovery Log Entry 1 00:35:09.804 ---------------------- 00:35:09.804 Transport Type: 3 (TCP) 00:35:09.804 Address Family: 1 (IPv4) 00:35:09.804 Subsystem Type: 2 (NVM Subsystem) 00:35:09.804 Entry Flags: 00:35:09.804 Duplicate Returned Information: 0 00:35:09.804 Explicit Persistent Connection Support for Discovery: 0 00:35:09.804 Transport Requirements: 00:35:09.804 Secure Channel: Not Specified 00:35:09.804 Port ID: 1 (0x0001) 00:35:09.804 Controller ID: 65535 (0xffff) 00:35:09.804 Admin Max SQ Size: 32 00:35:09.804 Transport Service Identifier: 4420 00:35:09.804 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:09.804 Transport Address: 10.0.0.1 00:35:09.804 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:09.804 EAL: No free 2048 kB hugepages reported on node 1 00:35:09.804 get_feature(0x01) failed 00:35:09.804 get_feature(0x02) failed 00:35:09.804 get_feature(0x04) failed 00:35:09.804 ===================================================== 00:35:09.804 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:09.804 ===================================================== 00:35:09.804 Controller Capabilities/Features 00:35:09.804 ================================ 00:35:09.804 Vendor ID: 0000 00:35:09.804 Subsystem Vendor ID: 0000 00:35:09.804 Serial Number: 90974f93cbd8bbdf81f6 00:35:09.804 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:09.804 Firmware Version: 6.7.0-68 00:35:09.804 Recommended Arb Burst: 6 00:35:09.804 IEEE OUI Identifier: 00 00 00 00:35:09.804 Multi-path I/O 00:35:09.804 May have multiple subsystem ports: Yes 00:35:09.804 May have multiple controllers: Yes 00:35:09.804 Associated with SR-IOV VF: No 00:35:09.804 Max Data Transfer Size: Unlimited 00:35:09.804 Max Number of Namespaces: 1024 00:35:09.804 Max Number of I/O Queues: 128 00:35:09.804 NVMe Specification Version (VS): 1.3 00:35:09.804 NVMe Specification Version (Identify): 1.3 00:35:09.804 Maximum Queue Entries: 1024 00:35:09.804 Contiguous Queues Required: No 00:35:09.804 Arbitration Mechanisms Supported 00:35:09.804 Weighted Round Robin: Not Supported 00:35:09.804 Vendor Specific: Not Supported 00:35:09.804 Reset Timeout: 7500 ms 00:35:09.804 Doorbell Stride: 4 bytes 00:35:09.804 NVM Subsystem Reset: Not Supported 
00:35:09.804 Command Sets Supported 00:35:09.804 NVM Command Set: Supported 00:35:09.804 Boot Partition: Not Supported 00:35:09.804 Memory Page Size Minimum: 4096 bytes 00:35:09.804 Memory Page Size Maximum: 4096 bytes 00:35:09.804 Persistent Memory Region: Not Supported 00:35:09.804 Optional Asynchronous Events Supported 00:35:09.804 Namespace Attribute Notices: Supported 00:35:09.804 Firmware Activation Notices: Not Supported 00:35:09.804 ANA Change Notices: Supported 00:35:09.804 PLE Aggregate Log Change Notices: Not Supported 00:35:09.804 LBA Status Info Alert Notices: Not Supported 00:35:09.804 EGE Aggregate Log Change Notices: Not Supported 00:35:09.804 Normal NVM Subsystem Shutdown event: Not Supported 00:35:09.804 Zone Descriptor Change Notices: Not Supported 00:35:09.804 Discovery Log Change Notices: Not Supported 00:35:09.804 Controller Attributes 00:35:09.804 128-bit Host Identifier: Supported 00:35:09.804 Non-Operational Permissive Mode: Not Supported 00:35:09.804 NVM Sets: Not Supported 00:35:09.804 Read Recovery Levels: Not Supported 00:35:09.804 Endurance Groups: Not Supported 00:35:09.804 Predictable Latency Mode: Not Supported 00:35:09.805 Traffic Based Keep ALive: Supported 00:35:09.805 Namespace Granularity: Not Supported 00:35:09.805 SQ Associations: Not Supported 00:35:09.805 UUID List: Not Supported 00:35:09.805 Multi-Domain Subsystem: Not Supported 00:35:09.805 Fixed Capacity Management: Not Supported 00:35:09.805 Variable Capacity Management: Not Supported 00:35:09.805 Delete Endurance Group: Not Supported 00:35:09.805 Delete NVM Set: Not Supported 00:35:09.805 Extended LBA Formats Supported: Not Supported 00:35:09.805 Flexible Data Placement Supported: Not Supported 00:35:09.805 00:35:09.805 Controller Memory Buffer Support 00:35:09.805 ================================ 00:35:09.805 Supported: No 00:35:09.805 00:35:09.805 Persistent Memory Region Support 00:35:09.805 ================================ 00:35:09.805 Supported: No 00:35:09.805 00:35:09.805 Admin Command Set Attributes 00:35:09.805 ============================ 00:35:09.805 Security Send/Receive: Not Supported 00:35:09.805 Format NVM: Not Supported 00:35:09.805 Firmware Activate/Download: Not Supported 00:35:09.805 Namespace Management: Not Supported 00:35:09.805 Device Self-Test: Not Supported 00:35:09.805 Directives: Not Supported 00:35:09.805 NVMe-MI: Not Supported 00:35:09.805 Virtualization Management: Not Supported 00:35:09.805 Doorbell Buffer Config: Not Supported 00:35:09.805 Get LBA Status Capability: Not Supported 00:35:09.805 Command & Feature Lockdown Capability: Not Supported 00:35:09.805 Abort Command Limit: 4 00:35:09.805 Async Event Request Limit: 4 00:35:09.805 Number of Firmware Slots: N/A 00:35:09.805 Firmware Slot 1 Read-Only: N/A 00:35:09.805 Firmware Activation Without Reset: N/A 00:35:09.805 Multiple Update Detection Support: N/A 00:35:09.805 Firmware Update Granularity: No Information Provided 00:35:09.805 Per-Namespace SMART Log: Yes 00:35:09.805 Asymmetric Namespace Access Log Page: Supported 00:35:09.805 ANA Transition Time : 10 sec 00:35:09.805 00:35:09.805 Asymmetric Namespace Access Capabilities 00:35:09.805 ANA Optimized State : Supported 00:35:09.805 ANA Non-Optimized State : Supported 00:35:09.805 ANA Inaccessible State : Supported 00:35:09.805 ANA Persistent Loss State : Supported 00:35:09.805 ANA Change State : Supported 00:35:09.805 ANAGRPID is not changed : No 00:35:09.805 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:09.805 00:35:09.805 ANA Group Identifier 
Maximum : 128 00:35:09.805 Number of ANA Group Identifiers : 128 00:35:09.805 Max Number of Allowed Namespaces : 1024 00:35:09.805 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:09.805 Command Effects Log Page: Supported 00:35:09.805 Get Log Page Extended Data: Supported 00:35:09.805 Telemetry Log Pages: Not Supported 00:35:09.805 Persistent Event Log Pages: Not Supported 00:35:09.805 Supported Log Pages Log Page: May Support 00:35:09.805 Commands Supported & Effects Log Page: Not Supported 00:35:09.805 Feature Identifiers & Effects Log Page:May Support 00:35:09.805 NVMe-MI Commands & Effects Log Page: May Support 00:35:09.805 Data Area 4 for Telemetry Log: Not Supported 00:35:09.805 Error Log Page Entries Supported: 128 00:35:09.805 Keep Alive: Supported 00:35:09.805 Keep Alive Granularity: 1000 ms 00:35:09.805 00:35:09.805 NVM Command Set Attributes 00:35:09.805 ========================== 00:35:09.805 Submission Queue Entry Size 00:35:09.805 Max: 64 00:35:09.805 Min: 64 00:35:09.805 Completion Queue Entry Size 00:35:09.805 Max: 16 00:35:09.805 Min: 16 00:35:09.805 Number of Namespaces: 1024 00:35:09.805 Compare Command: Not Supported 00:35:09.805 Write Uncorrectable Command: Not Supported 00:35:09.805 Dataset Management Command: Supported 00:35:09.805 Write Zeroes Command: Supported 00:35:09.805 Set Features Save Field: Not Supported 00:35:09.805 Reservations: Not Supported 00:35:09.805 Timestamp: Not Supported 00:35:09.805 Copy: Not Supported 00:35:09.805 Volatile Write Cache: Present 00:35:09.805 Atomic Write Unit (Normal): 1 00:35:09.805 Atomic Write Unit (PFail): 1 00:35:09.805 Atomic Compare & Write Unit: 1 00:35:09.805 Fused Compare & Write: Not Supported 00:35:09.805 Scatter-Gather List 00:35:09.805 SGL Command Set: Supported 00:35:09.805 SGL Keyed: Not Supported 00:35:09.805 SGL Bit Bucket Descriptor: Not Supported 00:35:09.805 SGL Metadata Pointer: Not Supported 00:35:09.805 Oversized SGL: Not Supported 00:35:09.805 SGL Metadata Address: Not Supported 00:35:09.805 SGL Offset: Supported 00:35:09.805 Transport SGL Data Block: Not Supported 00:35:09.805 Replay Protected Memory Block: Not Supported 00:35:09.805 00:35:09.805 Firmware Slot Information 00:35:09.805 ========================= 00:35:09.805 Active slot: 0 00:35:09.805 00:35:09.805 Asymmetric Namespace Access 00:35:09.805 =========================== 00:35:09.805 Change Count : 0 00:35:09.805 Number of ANA Group Descriptors : 1 00:35:09.805 ANA Group Descriptor : 0 00:35:09.805 ANA Group ID : 1 00:35:09.805 Number of NSID Values : 1 00:35:09.805 Change Count : 0 00:35:09.805 ANA State : 1 00:35:09.805 Namespace Identifier : 1 00:35:09.805 00:35:09.805 Commands Supported and Effects 00:35:09.805 ============================== 00:35:09.805 Admin Commands 00:35:09.805 -------------- 00:35:09.805 Get Log Page (02h): Supported 00:35:09.805 Identify (06h): Supported 00:35:09.805 Abort (08h): Supported 00:35:09.805 Set Features (09h): Supported 00:35:09.805 Get Features (0Ah): Supported 00:35:09.805 Asynchronous Event Request (0Ch): Supported 00:35:09.805 Keep Alive (18h): Supported 00:35:09.805 I/O Commands 00:35:09.805 ------------ 00:35:09.805 Flush (00h): Supported 00:35:09.805 Write (01h): Supported LBA-Change 00:35:09.805 Read (02h): Supported 00:35:09.805 Write Zeroes (08h): Supported LBA-Change 00:35:09.805 Dataset Management (09h): Supported 00:35:09.805 00:35:09.805 Error Log 00:35:09.805 ========= 00:35:09.805 Entry: 0 00:35:09.805 Error Count: 0x3 00:35:09.805 Submission Queue Id: 0x0 00:35:09.805 Command Id: 0x5 
00:35:09.805 Phase Bit: 0 00:35:09.805 Status Code: 0x2 00:35:09.805 Status Code Type: 0x0 00:35:09.805 Do Not Retry: 1 00:35:10.065 Error Location: 0x28 00:35:10.065 LBA: 0x0 00:35:10.065 Namespace: 0x0 00:35:10.065 Vendor Log Page: 0x0 00:35:10.065 ----------- 00:35:10.065 Entry: 1 00:35:10.065 Error Count: 0x2 00:35:10.065 Submission Queue Id: 0x0 00:35:10.065 Command Id: 0x5 00:35:10.065 Phase Bit: 0 00:35:10.065 Status Code: 0x2 00:35:10.065 Status Code Type: 0x0 00:35:10.065 Do Not Retry: 1 00:35:10.065 Error Location: 0x28 00:35:10.065 LBA: 0x0 00:35:10.065 Namespace: 0x0 00:35:10.065 Vendor Log Page: 0x0 00:35:10.065 ----------- 00:35:10.065 Entry: 2 00:35:10.065 Error Count: 0x1 00:35:10.065 Submission Queue Id: 0x0 00:35:10.065 Command Id: 0x4 00:35:10.065 Phase Bit: 0 00:35:10.065 Status Code: 0x2 00:35:10.065 Status Code Type: 0x0 00:35:10.065 Do Not Retry: 1 00:35:10.065 Error Location: 0x28 00:35:10.065 LBA: 0x0 00:35:10.065 Namespace: 0x0 00:35:10.065 Vendor Log Page: 0x0 00:35:10.065 00:35:10.065 Number of Queues 00:35:10.065 ================ 00:35:10.065 Number of I/O Submission Queues: 128 00:35:10.065 Number of I/O Completion Queues: 128 00:35:10.065 00:35:10.065 ZNS Specific Controller Data 00:35:10.065 ============================ 00:35:10.065 Zone Append Size Limit: 0 00:35:10.065 00:35:10.065 00:35:10.065 Active Namespaces 00:35:10.065 ================= 00:35:10.065 get_feature(0x05) failed 00:35:10.065 Namespace ID:1 00:35:10.065 Command Set Identifier: NVM (00h) 00:35:10.065 Deallocate: Supported 00:35:10.065 Deallocated/Unwritten Error: Not Supported 00:35:10.065 Deallocated Read Value: Unknown 00:35:10.065 Deallocate in Write Zeroes: Not Supported 00:35:10.065 Deallocated Guard Field: 0xFFFF 00:35:10.065 Flush: Supported 00:35:10.065 Reservation: Not Supported 00:35:10.065 Namespace Sharing Capabilities: Multiple Controllers 00:35:10.065 Size (in LBAs): 1953525168 (931GiB) 00:35:10.065 Capacity (in LBAs): 1953525168 (931GiB) 00:35:10.065 Utilization (in LBAs): 1953525168 (931GiB) 00:35:10.065 UUID: 9af1fa77-60ab-4896-8763-b015ac8029c0 00:35:10.065 Thin Provisioning: Not Supported 00:35:10.065 Per-NS Atomic Units: Yes 00:35:10.065 Atomic Boundary Size (Normal): 0 00:35:10.065 Atomic Boundary Size (PFail): 0 00:35:10.065 Atomic Boundary Offset: 0 00:35:10.065 NGUID/EUI64 Never Reused: No 00:35:10.065 ANA group ID: 1 00:35:10.065 Namespace Write Protected: No 00:35:10.065 Number of LBA Formats: 1 00:35:10.065 Current LBA Format: LBA Format #00 00:35:10.065 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:10.065 00:35:10.065 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:10.065 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:10.065 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:35:10.065 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:10.065 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:35:10.065 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:10.065 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:10.065 rmmod nvme_tcp 00:35:10.065 rmmod nvme_fabrics 00:35:10.065 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:10.065 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- 
# set -e 00:35:10.065 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:35:10.066 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:35:10.066 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:10.066 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:10.066 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:10.066 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:10.066 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:10.066 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.066 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:10.066 13:46:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.972 13:46:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:11.972 13:46:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:11.972 13:46:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:11.972 13:46:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:35:11.972 13:46:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:11.972 13:46:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:11.972 13:46:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:11.972 13:46:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:11.972 13:46:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:11.972 13:46:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:11.972 13:46:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:13.346 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:13.346 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:13.346 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:13.346 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:13.346 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:13.346 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:13.346 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:13.346 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:13.346 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:13.346 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:13.346 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:13.346 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:13.346 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:13.346 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:13.346 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:13.346 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:35:14.281 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:14.281 00:35:14.281 real 0m9.621s 00:35:14.281 user 0m2.067s 00:35:14.281 sys 0m3.457s 00:35:14.281 13:46:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:14.281 13:46:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:14.281 ************************************ 00:35:14.281 END TEST nvmf_identify_kernel_target 00:35:14.281 ************************************ 00:35:14.539 13:46:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:14.539 13:46:49 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:14.539 13:46:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:14.539 13:46:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:14.539 13:46:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:14.539 ************************************ 00:35:14.539 START TEST nvmf_auth_host 00:35:14.539 ************************************ 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:14.539 * Looking for test storage... 00:35:14.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:14.539 13:46:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:35:14.539 13:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local 
-ga mlx 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:16.438 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:16.439 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:16.439 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:16.439 13:46:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:16.439 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:16.439 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.439 13:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:16.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:16.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:35:16.439 00:35:16.439 --- 10.0.0.2 ping statistics --- 00:35:16.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.439 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:16.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:35:16.439 00:35:16.439 --- 10.0.0.1 ping statistics --- 00:35:16.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.439 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=436845 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 436845 
00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 436845 ']' 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:16.439 13:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.812 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:17.812 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:17.812 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:17.812 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:17.812 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.812 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:17.812 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:17.812 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:17.812 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=720ab483cf57e5c2eb8011e622b6a5ff 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.mBV 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 720ab483cf57e5c2eb8011e622b6a5ff 0 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 720ab483cf57e5c2eb8011e622b6a5ff 0 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=720ab483cf57e5c2eb8011e622b6a5ff 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.mBV 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.mBV 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.mBV 00:35:17.813 13:46:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aeb8b252eaba4a2829196e272fcfdc74be8e09bfac5472a19dd8f724928908c8 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.AKw 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aeb8b252eaba4a2829196e272fcfdc74be8e09bfac5472a19dd8f724928908c8 3 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aeb8b252eaba4a2829196e272fcfdc74be8e09bfac5472a19dd8f724928908c8 3 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aeb8b252eaba4a2829196e272fcfdc74be8e09bfac5472a19dd8f724928908c8 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.AKw 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.AKw 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.AKw 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ff057e2099fa790129b6e83d48b6f90fca9a3322de0904a9 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GMs 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ff057e2099fa790129b6e83d48b6f90fca9a3322de0904a9 0 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ff057e2099fa790129b6e83d48b6f90fca9a3322de0904a9 0 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 
00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ff057e2099fa790129b6e83d48b6f90fca9a3322de0904a9 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GMs 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GMs 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.GMs 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2348baf2bb7715572d01d3a350d409428f47eb6a932610d3 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YGP 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2348baf2bb7715572d01d3a350d409428f47eb6a932610d3 2 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2348baf2bb7715572d01d3a350d409428f47eb6a932610d3 2 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2348baf2bb7715572d01d3a350d409428f47eb6a932610d3 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YGP 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YGP 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.YGP 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=a19739f73ad2caab5fd892c3287f7b1d 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ClF 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a19739f73ad2caab5fd892c3287f7b1d 1 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a19739f73ad2caab5fd892c3287f7b1d 1 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a19739f73ad2caab5fd892c3287f7b1d 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ClF 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ClF 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ClF 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8439a46d74aa12317a3a9b4023293607 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4YK 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8439a46d74aa12317a3a9b4023293607 1 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8439a46d74aa12317a3a9b4023293607 1 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8439a46d74aa12317a3a9b4023293607 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4YK 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4YK 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.4YK 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=240858f206337ef590190066e32040a10ba5d235ae738f15 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:17.813 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.hHl 00:35:17.814 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 240858f206337ef590190066e32040a10ba5d235ae738f15 2 00:35:17.814 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 240858f206337ef590190066e32040a10ba5d235ae738f15 2 00:35:17.814 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:17.814 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:17.814 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=240858f206337ef590190066e32040a10ba5d235ae738f15 00:35:17.814 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:17.814 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.hHl 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.hHl 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.hHl 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0817a9938c8bfe7b1f3d111b15281221 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.rvs 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0817a9938c8bfe7b1f3d111b15281221 0 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0817a9938c8bfe7b1f3d111b15281221 0 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0817a9938c8bfe7b1f3d111b15281221 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:18.071 13:46:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.rvs 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.rvs 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.rvs 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=efe1bf39cddef9fb49b4046b470c21881a45183aaab4422895e6e34850cb37f0 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.mro 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key efe1bf39cddef9fb49b4046b470c21881a45183aaab4422895e6e34850cb37f0 3 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 efe1bf39cddef9fb49b4046b470c21881a45183aaab4422895e6e34850cb37f0 3 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=efe1bf39cddef9fb49b4046b470c21881a45183aaab4422895e6e34850cb37f0 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.mro 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.mro 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mro 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 436845 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 436845 ']' 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
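
Each gen_dhchap_key call above reads len/2 random bytes from /dev/urandom with xxd, keeps them as a hex string, and wraps that string into the DH-HMAC-CHAP secret representation "DHHC-1:<hash id>:<base64 payload>:" through an inline python helper whose body the xtrace output does not show. A rough reconstruction of the whole helper, assuming (from the key and DHHC-1 strings visible in the log) that the payload is the ASCII hex key with a little-endian CRC32 appended before base64 encoding:

  gen_dhchap_key() {                                  # usage: gen_dhchap_key sha512 64
      local digest=$1 len=$2 key file
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # hex string of $len characters
      file=$(mktemp -t "spdk.key-$digest.XXX")
      python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")           # assumption: CRC32 trailer, little-endian
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"                              # secrets are created mode 0600, as above
      echo "$file"
  }

Called as in the log (gen_dhchap_key null 32, gen_dhchap_key sha512 64, and so on), this yields the /tmp/spdk.key-*.XXX files that the keys[] and ckeys[] arrays collect for keyids 0 through 4.
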
00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:18.072 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.328 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:18.328 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.mBV 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.AKw ]] 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AKw 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.GMs 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.YGP ]] 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YGP 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ClF 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.4YK ]] 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4YK 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.329 13:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.hHl 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.rvs ]] 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.rvs 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mro 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
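
With the target application listening on /var/tmp/spdk.sock, each secret file generated above is handed to it under a well-known name through the keyring_file_add_key RPC: key0..key4 for the per-keyid host secrets and ckey0..ckey3 for the bidirectional controller secrets (ckeys[4] is empty, so only key4 is registered). A minimal sketch of that loop, assuming rpc_cmd resolves to scripts/rpc.py against the default RPC socket:

  rpc="$rootdir/scripts/rpc.py"                       # $rootdir: the spdk checkout used elsewhere in this job
  for i in "${!keys[@]}"; do
      "$rpc" keyring_file_add_key "key$i" "${keys[i]}"
      [[ -n ${ckeys[i]} ]] && "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
  done

The entries that follow build the counterpart kernel-side target under /sys/kernel/config/nvmet: modprobe nvmet, a subsystem nqn.2024-02.io.spdk:cnode0 backed by the local /dev/nvme0n1, and a TCP listener on 10.0.0.1:4420 that the discovery listing further down confirms.
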
00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:18.329 13:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:19.700 Waiting for block devices as requested 00:35:19.700 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:19.700 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:19.700 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:19.958 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:19.958 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:19.958 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:19.958 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:20.215 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:20.215 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:20.215 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:20.216 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:20.473 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:20.473 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:20.473 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:20.473 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:20.731 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:20.731 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:21.330 No valid GPT data, bailing 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:21.330 00:35:21.330 Discovery Log Number of Records 2, Generation counter 2 00:35:21.330 =====Discovery Log Entry 0====== 00:35:21.330 trtype: tcp 00:35:21.330 adrfam: ipv4 00:35:21.330 subtype: current discovery subsystem 00:35:21.330 treq: not specified, sq flow control disable supported 00:35:21.330 portid: 1 00:35:21.330 trsvcid: 4420 00:35:21.330 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:21.330 traddr: 10.0.0.1 00:35:21.330 eflags: none 00:35:21.330 sectype: none 00:35:21.330 =====Discovery Log Entry 1====== 00:35:21.330 trtype: tcp 00:35:21.330 adrfam: ipv4 00:35:21.330 subtype: nvme subsystem 00:35:21.330 treq: not specified, sq flow control disable supported 00:35:21.330 portid: 1 00:35:21.330 trsvcid: 4420 00:35:21.330 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:21.330 traddr: 10.0.0.1 00:35:21.330 eflags: none 00:35:21.330 sectype: none 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 
]] 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.330 13:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.589 nvme0n1 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.589 13:46:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.589 
13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.589 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.848 nvme0n1 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:21.848 13:46:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.848 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.107 nvme0n1 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
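
From here on the log follows one fixed pattern per digest/dhgroup/keyid combination: the kernel target's entry for nqn.2024-02.io.spdk:host0 is primed with the hash, the DH group and that keyid's DHHC-1 secrets (nvmet_auth_set_key), SPDK's initiator is restricted to the same digest and dhgroup (bdev_nvme_set_options), and bdev_nvme_attach_controller then negotiates DH-HMAC-CHAP, reporting the new bdev nvme0n1, after which the controller is verified and detached again. One iteration, condensed; the configfs attribute names dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key are assumptions (xtrace hides the redirection targets), $key_dhhc and $ckey_dhhc stand for the DHHC-1 strings echoed above, and the RPC invocations are taken from the log:

  # Target side (kernel nvmet): give the allowed host its DH-HMAC-CHAP parameters.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo ffdhe2048      > "$host/dhchap_dhgroup"
  echo "$key_dhhc"    > "$host/dhchap_key"            # host secret for this keyid
  echo "$ckey_dhhc"   > "$host/dhchap_ctrl_key"       # controller secret, when one exists

  # Initiator side (the SPDK app in the namespace): offer the same digest/dhgroup,
  # then attach using the keyring names registered earlier.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0              # reset before the next combination

The remaining entries repeat exactly this cycle across keyids 0-4 and then move on to the other DH groups (ffdhe3072 starts just below) and digests, which is why the same set_options/attach/detach sequence recurs for the rest of the test.
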
00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.107 nvme0n1 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.107 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:22.366 13:46:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.366 13:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.366 nvme0n1 00:35:22.366 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.366 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.366 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.366 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.366 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.366 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.366 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.366 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.366 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.366 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.625 nvme0n1 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:22.625 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.626 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.884 nvme0n1 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.884 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.142 nvme0n1 00:35:23.142 
13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.142 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.400 13:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.400 nvme0n1 00:35:23.400 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.400 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.400 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.400 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.400 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.400 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.400 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.400 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.400 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.400 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.657 nvme0n1 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.657 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.914 
13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.914 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.915 13:46:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.915 nvme0n1 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.915 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:24.173 13:46:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.173 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.454 nvme0n1 00:35:24.454 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.454 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.454 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.454 13:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.454 13:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:24.454 13:46:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.454 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.712 nvme0n1 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.712 13:46:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.712 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.970 nvme0n1 00:35:24.970 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.970 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.970 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.970 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.970 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.227 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.228 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.228 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.228 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.228 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.228 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.228 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.228 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.228 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.228 13:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.228 13:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:25.228 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.228 13:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.484 nvme0n1 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.484 13:47:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.484 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.485 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.741 nvme0n1 00:35:25.742 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.742 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.742 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.742 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.742 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.742 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.742 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.742 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.742 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.742 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:25.999 13:47:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.999 13:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.564 nvme0n1 00:35:26.564 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.564 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.564 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.564 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.564 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.564 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.564 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.564 
13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.565 13:47:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.565 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.131 nvme0n1 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.131 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:27.132 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:27.132 13:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:27.132 13:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:27.132 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.132 13:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.698 nvme0n1 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.698 
13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:27.698 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.699 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.266 nvme0n1 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.266 13:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.831 nvme0n1 00:35:28.831 13:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.831 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.831 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.831 13:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.831 13:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.831 13:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:29.088 13:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.089 13:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.089 13:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:29.089 13:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.089 13:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:29.089 13:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:29.089 13:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:29.089 13:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:29.089 13:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.089 13:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.020 nvme0n1 00:35:30.020 13:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.020 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.020 13:47:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.020 13:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.020 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.020 13:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.020 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.020 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.020 13:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.020 13:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.020 13:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.020 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.021 13:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.953 nvme0n1 00:35:30.953 13:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.953 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.953 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.953 13:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.953 13:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.953 13:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.953 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.953 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.953 13:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.953 13:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.211 13:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.140 nvme0n1 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.140 
13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
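Each connect_authenticate pass recorded above follows the same host-side sequence: restrict the allowed DH-HMAC-CHAP digest and DH group with bdev_nvme_set_options, resolve the initiator IP, attach the controller with the matching keyring key (plus the controller key when one is configured), verify it, and detach. A minimal standalone sketch of the sha256/ffdhe8192 pass with key index 2, issued through SPDK's scripts/rpc.py from the source tree (the rpc_cmd helper seen in this trace is assumed to be a thin wrapper around it, and key2/ckey2 are assumed to have been registered in SPDK's keyring earlier in the run, outside this excerpt):

# Restrict DH-HMAC-CHAP negotiation to the digest/DH-group pair under test.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Attach to the kernel nvmet target over TCP, authenticating with key 2;
# key2/ckey2 are keyring key names prepared earlier in the test (not shown here).
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Confirm the controller exists, then detach before the next key index is exercised.
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
./scripts/rpc.py bdev_nvme_detach_controller nvme0

If authentication fails the attach call errors out and no namespace appears; the bare nvme0n1 tokens interleaved in this trace are the namespace bdev name reported after each successful attach.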
00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.140 13:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.069 nvme0n1 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:33.069 
13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.069 13:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.440 nvme0n1 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.440 13:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.440 nvme0n1 00:35:34.440 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.440 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.440 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.440 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.440 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.440 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.440 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.440 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.440 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.440 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
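The entries at host/auth.sh@100, @101 and @102 just above mark the three nested loops driving this whole section: the digest loop has advanced from sha256 to sha384, the DH-group loop has restarted at ffdhe2048, and the key-index loop is starting again at 0. Reconstructed as a condensed sketch (the two helpers are the test's own functions, visible only piecemeal in the trace; array contents beyond the values exercised in this excerpt are assumptions):

digests=("sha256" "sha384" "sha512")                                   # sha256/sha384 visible here; sha512 assumed
dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") # 2048/6144/8192 visible here; the rest assumed
keys=("$key0" "$key1" "$key2" "$key3" "$key4")                         # DHHC-1 host secrets prepared earlier in the run
# A parallel ckeys array holds the controller (bidirectional) secrets; ckeys[4] is empty in this trace.

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Program the nvmet target side with this digest/dhgroup/key combination.
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # Reconfigure the SPDK host, attach with the matching key, verify, detach.
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done

Every iteration leaves the same fingerprint seen repeatedly in this log: one bdev_nvme_set_options call restricting negotiation to a single digest and DH group, one authenticated bdev_nvme_attach_controller, a bdev_nvme_get_controllers check for nvme0, and a bdev_nvme_detach_controller before the next key is tried.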
00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.441 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.726 nvme0n1 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.726 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.990 nvme0n1 00:35:34.990 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.990 13:47:09 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.990 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.990 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.990 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.990 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.991 nvme0n1 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.991 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.250 nvme0n1 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.250 13:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.509 nvme0n1 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
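The set-key / connect / verify / detach cycle traced above repeats for every key ID. A minimal bash sketch of one such cycle, assembled only from the rpc_cmd and helper calls visible in this trace (nvmet_auth_set_key, connect_authenticate, rpc_cmd and get_main_ns_ip are the helpers the trace itself names; this is not the verbatim host/auth.sh source), looks roughly like this:

  # One per-key cycle as echoed by the xtrace output above (sha384 / ffdhe3072 / key ID 1).
  digest=sha384
  dhgroup=ffdhe3072
  keyid=1

  # Load the DHHC-1 key (and its controller key, when one exists) on the target side.
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

  # Limit the initiator to the digest/DH group under test, then attach with DH-CHAP keys.
  # (--dhchap-ctrlr-key is passed only for key IDs that have a ckey; key ID 4 has none here.)
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a "$(get_main_ns_ip)" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # Verify the authenticated controller came up, then detach before the next key ID.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The trace continues below with the same cycle for the remaining key IDs and DH groups.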
00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.509 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.768 nvme0n1 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.768 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.026 nvme0n1 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.027 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.285 nvme0n1 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.285 13:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.285 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.285 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.285 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.285 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.285 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.285 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.285 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:36.285 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.285 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.285 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:36.543 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.544 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.544 nvme0n1 00:35:36.544 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.544 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.544 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.544 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.544 13:47:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.544 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.544 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.544 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.544 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.544 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.802 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.060 nvme0n1 00:35:37.060 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.060 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.060 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.060 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.060 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.060 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.060 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.060 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.060 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.060 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.060 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.061 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.319 nvme0n1 00:35:37.319 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.319 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.319 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.319 13:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.319 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.320 13:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.320 13:47:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.320 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.886 nvme0n1 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:37.886 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:37.887 13:47:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.887 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.146 nvme0n1 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:38.146 13:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.403 nvme0n1 00:35:38.403 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.404 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.967 nvme0n1 00:35:38.967 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.967 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.967 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.967 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.967 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.967 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.967 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.967 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.967 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.967 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.225 13:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.790 nvme0n1 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.790 13:47:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.790 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.355 nvme0n1 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.355 13:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.920 nvme0n1 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.920 13:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
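The trace above is one pass of the host/auth.sh@100-103 loops: for every digest, dhgroup and keyid the script installs the key on the target, narrows the initiator's DH-HMAC-CHAP options, attaches, verifies and detaches. A minimal bash sketch of a single pass, reconstructed from the xtrace output (rpc_cmd, nvmet_auth_set_key, the NQNs and 10.0.0.1:4420 are the names visible in the log; the simplified ckey handling below is an assumption):

# One connect_authenticate pass as exercised in the trace above (sketch only).
digest=sha384 dhgroup=ffdhe6144 keyid=4

# Install the DHHC-1 secret (and controller secret, when one exists) on the
# kernel nvmet target side.
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# Restrict the SPDK initiator to the digest/dhgroup pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the matching host key; the controller key is only passed for
# keyids that define one (keyid 4 in this run has none, so ckey stays unset).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" ${ckey:+--dhchap-ctrlr-key "ckey${keyid}"}

# Confirm the controller authenticated and came up, then detach so the next
# digest/dhgroup/keyid combination starts clean.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0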
00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.921 13:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.485 nvme0n1 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
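The get_main_ns_ip/ip_candidates lines that precede every attach are the helper that resolves which address to connect to. A rough reconstruction from the nvmf/common.sh@741-755 trace (ip_candidates, the candidate variable names and the 10.0.0.1 result are taken from the log; the transport variable name and the indirection step are assumptions):

# get_main_ns_ip as reconstructed from the nvmf/common.sh trace: pick the
# environment variable that holds a usable address for the current transport,
# dereference it and print the result (10.0.0.1 for TCP in this job).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
    ip=${!ip}                              # dereference -> 10.0.0.1 in this run

    [[ -z $ip ]] && return 1
    echo "$ip"
}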
00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.486 13:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.419 nvme0n1 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.419 13:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.793 nvme0n1 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:43.793 13:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:43.794 13:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:43.794 13:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.794 13:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.794 13:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:43.794 13:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.794 13:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:43.794 13:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:43.794 13:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:43.794 13:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:43.794 13:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.794 13:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.727 nvme0n1 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.727 13:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.663 nvme0n1 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:45.663 13:47:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.663 13:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.597 nvme0n1 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.597 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.855 nvme0n1 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.855 13:47:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.855 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.113 nvme0n1 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.113 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.371 nvme0n1 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.371 13:47:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.371 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.372 13:47:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.372 13:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.630 nvme0n1 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.630 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.907 nvme0n1 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:47.907 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:47.908 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:47.908 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.908 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.178 nvme0n1 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.178 
13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.178 13:47:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.178 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.435 nvme0n1 00:35:48.435 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.435 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.435 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.435 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.435 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.435 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.435 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.435 13:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.435 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.435 13:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
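Each pass of the trace above exercises the same host-side RPC sequence through the test's rpc_cmd wrapper: allow one DH-HMAC-CHAP digest and DH group, attach the controller with the key under test, confirm nvme0 shows up, then detach before the next key. A minimal standalone sketch of that sequence via SPDK's scripts/rpc.py (the test reaches the same RPCs through rpc_cmd; the running SPDK application and the earlier registration of the key names key1/ckey1 are assumptions, not shown in this excerpt):

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0

All flags and addresses above are taken from the trace itself; only the scripts/rpc.py invocation style is inferred.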
00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.435 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.692 nvme0n1 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.692 13:47:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
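The DHHC-1 strings being echoed are DH-HMAC-CHAP secrets in the standard NVMe text representation. Pulling one value from this trace apart (the annotation is commentary on the format, not test output, and the field meanings are quoted from the NVMe secret representation rather than from this log):

    # DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==:
    #  DHHC-1     fixed prefix identifying a DH-HMAC-CHAP secret
    #  02         how the secret was transformed: 00 = unhashed, 01 = SHA-256,
    #             02 = SHA-384, 03 = SHA-512
    #  Mj...Zg==  base64 of the secret bytes (the representation appends a CRC-32
    #             of the secret before encoding)
    #  :          closing separator

The ckey value paired with each key is the controller-side secret used for bidirectional authentication; key id 4 has an empty ckey in this run, so that pass authenticates the host only.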
00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.692 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.949 nvme0n1 00:35:48.949 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.950 
13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.950 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.207 nvme0n1 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.207 13:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.465 nvme0n1 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.465 13:47:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.465 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.723 nvme0n1 00:35:49.723 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.723 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.723 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
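The repetition is deliberate: auth.sh walks a matrix of DH groups and key ids for each digest, and the for-dhgroup / for-keyid markers above (host/auth.sh@101 and @102) are the loop headers doing that walking. A rough paraphrase of the slice visible in this part of the log, not the verbatim script, with the lists limited to what actually appears here (sha512 with ffdhe2048/3072/4096 and key ids 0-4; the script's full lists may be longer):

    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
        for keyid in 0 1 2 3 4; do
            nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # provision key/ckey on the target side
            connect_authenticate sha512 "$dhgroup" "$keyid"   # set host dhchap options, attach,
                                                              # check that nvme0 appears, detach
        done
    done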
00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.981 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.240 nvme0n1 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.240 13:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.498 nvme0n1 00:35:50.498 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.499 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.499 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.499 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.499 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.499 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.757 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.016 nvme0n1 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
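The get_main_ns_ip helper being traced at this point simply decides which environment variable holds the initiator-side address for the transport under test and dereferences it. A minimal sketch of that logic, reconstructed only from the expanded statements visible in the trace (the transport variable name and the indirect expansion are assumptions; the trace shows the already-expanded values tcp, NVMF_INITIATOR_IP and 10.0.0.1):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # Pick the candidate variable *name* for the transport in use, then dereference it.
      [[ -z $TEST_TRANSPORT ]] && return 1                    # variable name assumed
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                             # indirect expansion assumed
      echo "${!ip}"                                           # trace shows 10.0.0.1 here
  }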
00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.016 13:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.578 nvme0n1 00:35:51.578 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
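For every digest/dhgroup/keyid combination in this section the trace repeats the same cycle: provision the key on the target, point the host at the matching DH-HMAC-CHAP parameters, attach, confirm the controller came up, and detach. A sketch of one host-side cycle, built only from rpc_cmd invocations that appear verbatim in the trace (keyid 3 with ffdhe4096 shown; key3 and ckey3 name the key material the script set up earlier for that keyid):

  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0

The later negative tests in this log follow the same shape but attach with missing or mismatched keys and expect the JSON-RPC call to fail with "Input/output error" (code -5).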
00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.579 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.143 nvme0n1 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.143 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.400 13:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.966 nvme0n1 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.966 13:47:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.532 nvme0n1 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.533 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.099 nvme0n1 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.099 13:47:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:54.099 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIwYWI0ODNjZjU3ZTVjMmViODAxMWU2MjJiNmE1ZmadN2KT: 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: ]] 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWViOGIyNTJlYWJhNGEyODI5MTk2ZTI3MmZjZmRjNzRiZThlMDliZmFjNTQ3MmExOWRkOGY3MjQ5Mjg5MDhjOBoPNc8=: 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.100 13:47:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.033 nvme0n1 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.033 13:47:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.967 nvme0n1 00:35:55.968 13:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.968 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.968 13:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.968 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.968 13:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.968 13:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.226 13:47:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTE5NzM5ZjczYWQyY2FhYjVmZDg5MmMzMjg3ZjdiMWQylK+6: 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: ]] 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQzOWE0NmQ3NGFhMTIzMTdhM2E5YjQwMjMyOTM2MDcXEUi0: 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.226 13:47:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.161 nvme0n1 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjQwODU4ZjIwNjMzN2VmNTkwMTkwMDY2ZTMyMDQwYTEwYmE1ZDIzNWFlNzM4ZjE1T0NjZg==: 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: ]] 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgxN2E5OTM4YzhiZmU3YjFmM2QxMTFiMTUyODEyMjH4gbQK: 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:57.161 13:47:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.161 13:47:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.096 nvme0n1 00:35:58.096 13:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.096 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.096 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.096 13:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.096 13:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.096 13:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.096 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.096 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.096 13:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.096 13:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.096 13:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.096 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWZlMWJmMzljZGRlZjlmYjQ5YjQwNDZiNDcwYzIxODgxYTQ1MTgzYWFhYjQ0MjI4OTVlNmUzNDg1MGNiMzdmMJgnT2s=: 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:58.097 13:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.038 nvme0n1 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.038 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYwNTdlMjA5OWZhNzkwMTI5YjZlODNkNDhiNmY5MGZjYTlhMzMyMmRlMDkwNGE5dQfUkg==: 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: ]] 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjM0OGJhZjJiYjc3MTU1NzJkMDFkM2EzNTBkNDA5NDI4ZjQ3ZWI2YTkzMjYxMGQz98GfJg==: 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.039 
13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.039 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.297 request: 00:35:59.297 { 00:35:59.297 "name": "nvme0", 00:35:59.297 "trtype": "tcp", 00:35:59.297 "traddr": "10.0.0.1", 00:35:59.297 "adrfam": "ipv4", 00:35:59.297 "trsvcid": "4420", 00:35:59.297 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:59.297 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:59.297 "prchk_reftag": false, 00:35:59.297 "prchk_guard": false, 00:35:59.297 "hdgst": false, 00:35:59.297 "ddgst": false, 00:35:59.297 "method": "bdev_nvme_attach_controller", 00:35:59.297 "req_id": 1 00:35:59.297 } 00:35:59.297 Got JSON-RPC error response 00:35:59.297 response: 00:35:59.297 { 00:35:59.297 "code": -5, 00:35:59.297 "message": "Input/output error" 00:35:59.297 } 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.297 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.297 request: 00:35:59.297 { 00:35:59.297 "name": "nvme0", 00:35:59.297 "trtype": "tcp", 00:35:59.297 "traddr": "10.0.0.1", 00:35:59.297 "adrfam": "ipv4", 00:35:59.297 "trsvcid": "4420", 00:35:59.297 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:59.298 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:59.298 "prchk_reftag": false, 00:35:59.298 "prchk_guard": false, 00:35:59.298 "hdgst": false, 00:35:59.298 "ddgst": false, 00:35:59.298 "dhchap_key": "key2", 00:35:59.298 "method": "bdev_nvme_attach_controller", 00:35:59.298 "req_id": 1 00:35:59.298 } 00:35:59.298 Got JSON-RPC error response 00:35:59.298 response: 00:35:59.298 { 00:35:59.298 "code": -5, 00:35:59.298 "message": "Input/output error" 00:35:59.298 } 00:35:59.298 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:59.298 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:59.298 13:47:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:59.298 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:59.298 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:59.298 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.298 13:47:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:59.298 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.298 13:47:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.298 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.556 request: 00:35:59.556 { 00:35:59.556 "name": "nvme0", 00:35:59.556 "trtype": "tcp", 00:35:59.556 "traddr": "10.0.0.1", 00:35:59.556 "adrfam": "ipv4", 
00:35:59.556 "trsvcid": "4420", 00:35:59.556 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:59.556 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:59.556 "prchk_reftag": false, 00:35:59.556 "prchk_guard": false, 00:35:59.556 "hdgst": false, 00:35:59.556 "ddgst": false, 00:35:59.556 "dhchap_key": "key1", 00:35:59.556 "dhchap_ctrlr_key": "ckey2", 00:35:59.556 "method": "bdev_nvme_attach_controller", 00:35:59.556 "req_id": 1 00:35:59.556 } 00:35:59.556 Got JSON-RPC error response 00:35:59.556 response: 00:35:59.556 { 00:35:59.556 "code": -5, 00:35:59.556 "message": "Input/output error" 00:35:59.556 } 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:59.556 rmmod nvme_tcp 00:35:59.556 rmmod nvme_fabrics 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 436845 ']' 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 436845 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 436845 ']' 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 436845 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 436845 00:35:59.556 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:59.557 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:59.557 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 436845' 00:35:59.557 killing process with pid 436845 00:35:59.557 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 436845 00:35:59.557 13:47:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 436845 00:36:00.933 13:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:36:00.933 13:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:00.933 13:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:00.933 13:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:00.933 13:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:00.933 13:47:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.933 13:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:00.933 13:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.886 13:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:02.886 13:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:02.886 13:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:02.886 13:47:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:02.886 13:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:02.886 13:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:36:02.886 13:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:02.886 13:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:02.886 13:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:02.886 13:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:02.886 13:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:02.886 13:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:02.886 13:47:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:04.260 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:04.260 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:04.260 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:04.260 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:04.260 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:04.260 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:04.260 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:04.260 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:04.260 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:04.260 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:04.260 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:04.260 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:04.260 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:04.260 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:04.260 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:04.260 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:05.204 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:05.204 13:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.mBV /tmp/spdk.key-null.GMs /tmp/spdk.key-sha256.ClF /tmp/spdk.key-sha384.hHl /tmp/spdk.key-sha512.mro 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:05.204 13:47:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:06.578 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:06.578 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:06.578 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:06.578 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:06.578 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:06.578 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:06.578 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:06.578 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:06.578 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:06.578 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:06.578 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:06.578 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:06.578 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:06.578 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:06.578 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:06.578 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:06.578 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:06.578 00:36:06.578 real 0m52.040s 00:36:06.578 user 0m49.702s 00:36:06.578 sys 0m6.115s 00:36:06.578 13:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:06.578 13:47:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.578 ************************************ 00:36:06.578 END TEST nvmf_auth_host 00:36:06.578 ************************************ 00:36:06.578 13:47:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:06.578 13:47:41 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:36:06.578 13:47:41 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:06.578 13:47:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:06.578 13:47:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:06.578 13:47:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:06.578 ************************************ 00:36:06.578 START TEST nvmf_digest 00:36:06.578 ************************************ 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:06.578 * Looking for test storage... 
00:36:06.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:06.578 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:06.579 13:47:41 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:36:06.579 13:47:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:08.478 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:08.478 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:08.479 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:08.479 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:08.479 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:08.479 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:08.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:08.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:36:08.737 00:36:08.737 --- 10.0.0.2 ping statistics --- 00:36:08.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.737 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:08.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:08.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:36:08.737 00:36:08.737 --- 10.0.0.1 ping statistics --- 00:36:08.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.737 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:08.737 ************************************ 00:36:08.737 START TEST nvmf_digest_clean 00:36:08.737 ************************************ 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=446675 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 446675 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 446675 ']' 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.737 
13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:08.737 13:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:08.737 [2024-07-13 13:47:43.434024] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:08.737 [2024-07-13 13:47:43.434184] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:08.995 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.995 [2024-07-13 13:47:43.578433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.253 [2024-07-13 13:47:43.837631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:09.253 [2024-07-13 13:47:43.837713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:09.253 [2024-07-13 13:47:43.837754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:09.253 [2024-07-13 13:47:43.837779] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:09.253 [2024-07-13 13:47:43.837801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
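At this point host/digest.sh has handed off to nvmfappstart: the nvmf_tgt application is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, its pid is captured as nvmfpid (446675 here), and waitforlisten blocks until the /var/tmp/spdk.sock RPC socket answers before the target gets configured. A rough sketch of that launch pattern using the paths from the log; the polling loop only approximates what waitforlisten does (illustrative, not captured output):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the target in the test namespace; --wait-for-rpc defers subsystem
# initialization until framework_start_init is sent over the RPC socket.
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# Wait for /var/tmp/spdk.sock to accept RPCs (approximation of waitforlisten).
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done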
00:36:09.253 [2024-07-13 13:47:43.837849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:09.817 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:09.817 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:09.817 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:09.817 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:09.817 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:09.817 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:09.817 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:09.817 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:09.817 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:09.817 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.817 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:10.075 null0 00:36:10.075 [2024-07-13 13:47:44.752969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:10.075 [2024-07-13 13:47:44.777215] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=446829 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 446829 /var/tmp/bperf.sock 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 446829 ']' 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:36:10.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:10.075 13:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:10.334 [2024-07-13 13:47:44.868731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:10.334 [2024-07-13 13:47:44.868912] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid446829 ] 00:36:10.334 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.334 [2024-07-13 13:47:45.012537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:10.592 [2024-07-13 13:47:45.275433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:11.156 13:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:11.156 13:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:11.156 13:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:11.156 13:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:11.156 13:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:11.722 13:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:11.722 13:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:12.335 nvme0n1 00:36:12.335 13:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:12.335 13:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:12.335 Running I/O for 2 seconds... 
00:36:14.230 00:36:14.230 Latency(us) 00:36:14.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.230 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:14.230 nvme0n1 : 2.01 13646.32 53.31 0.00 0.00 9365.87 5048.70 22039.51 00:36:14.230 =================================================================================================================== 00:36:14.230 Total : 13646.32 53.31 0.00 0.00 9365.87 5048.70 22039.51 00:36:14.230 0 00:36:14.230 13:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:14.230 13:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:14.230 13:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:14.230 13:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:14.230 13:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:14.230 | select(.opcode=="crc32c") 00:36:14.230 | "\(.module_name) \(.executed)"' 00:36:14.488 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:14.488 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:14.488 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:14.488 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:14.489 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 446829 00:36:14.489 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 446829 ']' 00:36:14.489 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 446829 00:36:14.489 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:14.489 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:14.489 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 446829 00:36:14.489 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:14.489 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:14.489 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 446829' 00:36:14.489 killing process with pid 446829 00:36:14.489 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 446829 00:36:14.489 Received shutdown signal, test time was about 2.000000 seconds 00:36:14.489 00:36:14.489 Latency(us) 00:36:14.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.489 =================================================================================================================== 00:36:14.489 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:14.489 13:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 446829 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:15.864 13:47:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=447494 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 447494 /var/tmp/bperf.sock 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 447494 ']' 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:15.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:15.864 13:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:15.864 [2024-07-13 13:47:50.324818] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:15.864 [2024-07-13 13:47:50.325014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447494 ] 00:36:15.864 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:15.864 Zero copy mechanism will not be used. 
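The first run_bperf pass (randread, 4096-byte I/O, queue depth 128) finished above, and the second (randread, 131072-byte I/O, queue depth 16) is starting up. After each pass, host/digest.sh pulls accel framework statistics from the bdevperf instance over /var/tmp/bperf.sock and checks that the crc32c work was executed and attributed to the expected module (software here, since DSA is disabled). A sketch of that check, mirroring the accel_get_stats record above (illustrative, not captured output):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Print "<module_name> <executed>" for the crc32c opcode, as host/digest.sh@36-37 do.
$RPC -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

# host/digest.sh@94-96 then require executed > 0 and module_name == "software"
# (the expected engine when no DSA accelerator is configured).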
00:36:15.864 EAL: No free 2048 kB hugepages reported on node 1 00:36:15.864 [2024-07-13 13:47:50.448806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.123 [2024-07-13 13:47:50.675646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.688 13:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:16.688 13:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:16.688 13:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:16.688 13:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:16.688 13:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:17.254 13:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:17.254 13:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:17.821 nvme0n1 00:36:17.821 13:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:17.821 13:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:17.821 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:17.821 Zero copy mechanism will not be used. 00:36:17.821 Running I/O for 2 seconds... 
00:36:19.753 00:36:19.753 Latency(us) 00:36:19.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.753 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:19.753 nvme0n1 : 2.00 2672.22 334.03 0.00 0.00 5980.82 1771.90 13689.74 00:36:19.753 =================================================================================================================== 00:36:19.753 Total : 2672.22 334.03 0.00 0.00 5980.82 1771.90 13689.74 00:36:19.753 0 00:36:19.753 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:19.753 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:19.753 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:19.753 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:19.753 | select(.opcode=="crc32c") 00:36:19.753 | "\(.module_name) \(.executed)"' 00:36:19.753 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:20.011 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:20.011 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:20.011 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:20.011 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:20.011 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 447494 00:36:20.011 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 447494 ']' 00:36:20.011 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 447494 00:36:20.011 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:20.011 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:20.011 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 447494 00:36:20.269 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:20.269 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:20.269 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 447494' 00:36:20.269 killing process with pid 447494 00:36:20.269 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 447494 00:36:20.269 Received shutdown signal, test time was about 2.000000 seconds 00:36:20.269 00:36:20.269 Latency(us) 00:36:20.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.269 =================================================================================================================== 00:36:20.269 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:20.269 13:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 447494 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:21.202 13:47:55 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=448164 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 448164 /var/tmp/bperf.sock 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 448164 ']' 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:21.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:21.202 13:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:21.202 [2024-07-13 13:47:55.897616] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
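Each run_bperf iteration launches a fresh bdevperf behind --wait-for-rpc so the digest options can be applied before any I/O starts. A hedged sketch of the launch traced at host/digest.sh@82-@84 above (only the command line is copied from the trace; the backgrounding and pid capture are assumptions):

    # rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rw=randwrite bs=4096 qd=128                     # values set at host/digest.sh@80 for this iteration

    "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
    bperfpid=$!                                     # 448164 in this run
    waitforlisten "$bperfpid" /var/tmp/bperf.sock   # harness helper: block until the RPC socket answers

Because of --wait-for-rpc the app sits idle until the later 'bperf_rpc framework_start_init', which is when its reactor actually comes up (the "Reactor started on core 1" notice that follows).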
00:36:21.202 [2024-07-13 13:47:55.897783] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448164 ] 00:36:21.461 EAL: No free 2048 kB hugepages reported on node 1 00:36:21.461 [2024-07-13 13:47:56.026232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.720 [2024-07-13 13:47:56.279702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.286 13:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:22.286 13:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:22.286 13:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:22.286 13:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:22.286 13:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:22.853 13:47:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:22.853 13:47:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:23.111 nvme0n1 00:36:23.111 13:47:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:23.111 13:47:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:23.370 Running I/O for 2 seconds... 
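The controller attach traced above is what turns on the digest path under test. The RPC as issued in this run, copied from the trace with comments added:

    # attach the target's cnode1 subsystem over TCP and name the controller 'nvme0';
    # --ddgst enables the NVMe/TCP data digest, so every data PDU gets a crc32c
    # computed/verified through the accel framework that was just initialized
    "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # the namespace shows up as bdev 'nvme0n1', the job name in the latency summary below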
00:36:25.273 00:36:25.273 Latency(us) 00:36:25.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.273 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:25.273 nvme0n1 : 2.01 16558.40 64.68 0.00 0.00 7714.66 3373.89 13981.01 00:36:25.273 =================================================================================================================== 00:36:25.273 Total : 16558.40 64.68 0.00 0.00 7714.66 3373.89 13981.01 00:36:25.273 0 00:36:25.273 13:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:25.273 13:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:25.273 13:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:25.273 13:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:25.273 13:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:25.273 | select(.opcode=="crc32c") 00:36:25.273 | "\(.module_name) \(.executed)"' 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 448164 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 448164 ']' 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 448164 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 448164 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 448164' 00:36:25.532 killing process with pid 448164 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 448164 00:36:25.532 Received shutdown signal, test time was about 2.000000 seconds 00:36:25.532 00:36:25.532 Latency(us) 00:36:25.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.532 =================================================================================================================== 00:36:25.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:25.532 13:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 448164 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:26.908 13:48:01 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=448904 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 448904 /var/tmp/bperf.sock 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 448904 ']' 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:26.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:26.908 13:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:26.908 [2024-07-13 13:48:01.405700] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:26.908 [2024-07-13 13:48:01.405864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448904 ] 00:36:26.908 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:26.908 Zero copy mechanism will not be used. 
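Every iteration ends by tearing its bdevperf instance down with killprocess, traced above for pids 447494 and 448164. An approximate reconstruction of that helper from the common/autotest_common.sh@948-@972 frames (the real function has additional branches, e.g. the sudo check visible in the trace):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                   # @948: refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 0      # @952: nothing to do if the process is already gone
        if [ "$(uname)" = Linux ]; then             # @953/@954: name the process being stopped
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"        # @966
        kill "$pid"                                 # @967: triggers the bdevperf shutdown summary
        wait "$pid" || true                         # @972: reap it so /var/tmp/bperf.sock is free for the next run
    }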
00:36:26.908 EAL: No free 2048 kB hugepages reported on node 1 00:36:26.908 [2024-07-13 13:48:01.534577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.166 [2024-07-13 13:48:01.780692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.733 13:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:27.733 13:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:27.733 13:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:27.733 13:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:27.733 13:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:28.302 13:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:28.302 13:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:28.560 nvme0n1 00:36:28.560 13:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:28.560 13:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:28.820 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:28.820 Zero copy mechanism will not be used. 00:36:28.820 Running I/O for 2 seconds... 
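When this run completes, the wrapper checks that the digest work really went through the expected accel module, exactly as in the get_accel_stats/jq traces above and below. The check, assembled from host/digest.sh@36/@37 and @93-@96 (the pipeline layout is an assumption, the jq program is copied verbatim):

    "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"' \
      | while read -r acc_module acc_executed; do
            # host/digest.sh asserts executed > 0 and, with DSA scanning off (scan_dsa=false),
            # that the module is the software crc32c implementation
            (( acc_executed > 0 )) && [[ $acc_module == software ]] \
                && echo "crc32c OK: $acc_executed ops via $acc_module"
        done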
00:36:30.726 00:36:30.726 Latency(us) 00:36:30.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.726 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:30.726 nvme0n1 : 2.01 2513.55 314.19 0.00 0.00 6348.32 4805.97 11699.39 00:36:30.726 =================================================================================================================== 00:36:30.726 Total : 2513.55 314.19 0.00 0.00 6348.32 4805.97 11699.39 00:36:30.726 0 00:36:30.726 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:30.726 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:30.726 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:30.726 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:30.726 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:30.726 | select(.opcode=="crc32c") 00:36:30.726 | "\(.module_name) \(.executed)"' 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 448904 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 448904 ']' 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 448904 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 448904 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 448904' 00:36:30.984 killing process with pid 448904 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 448904 00:36:30.984 Received shutdown signal, test time was about 2.000000 seconds 00:36:30.984 00:36:30.984 Latency(us) 00:36:30.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.984 =================================================================================================================== 00:36:30.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:30.984 13:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 448904 00:36:32.365 13:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 446675 00:36:32.365 13:48:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 446675 ']' 00:36:32.365 13:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 446675 00:36:32.365 13:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:32.365 13:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:32.365 13:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 446675 00:36:32.365 13:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:32.365 13:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:32.365 13:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 446675' 00:36:32.365 killing process with pid 446675 00:36:32.365 13:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 446675 00:36:32.365 13:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 446675 00:36:33.742 00:36:33.742 real 0m24.795s 00:36:33.742 user 0m47.897s 00:36:33.742 sys 0m4.615s 00:36:33.742 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:33.742 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:33.742 ************************************ 00:36:33.742 END TEST nvmf_digest_clean 00:36:33.742 ************************************ 00:36:33.742 13:48:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:36:33.742 13:48:08 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:33.742 13:48:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:33.742 13:48:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:33.742 13:48:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:33.742 ************************************ 00:36:33.742 START TEST nvmf_digest_error 00:36:33.742 ************************************ 00:36:33.742 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:36:33.743 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:33.743 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:33.743 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:33.743 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:33.743 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=450120 00:36:33.743 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:33.743 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 450120 00:36:33.743 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 450120 ']' 00:36:33.743 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:36:33.743 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:33.743 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:33.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:33.743 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:33.743 13:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:33.743 [2024-07-13 13:48:08.279012] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:33.743 [2024-07-13 13:48:08.279165] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:33.743 EAL: No free 2048 kB hugepages reported on node 1 00:36:33.743 [2024-07-13 13:48:08.421423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.025 [2024-07-13 13:48:08.687828] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:34.025 [2024-07-13 13:48:08.687928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:34.025 [2024-07-13 13:48:08.687959] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:34.025 [2024-07-13 13:48:08.687984] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:34.025 [2024-07-13 13:48:08.688006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
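The app_setup_trace notices above are only hints; with -e 0xFFFF the target keeps its tracepoint ring in shared memory. The two ways of looking at it that the notices themselves suggest (binary path assumed from the standard SPDK build layout):

    # live snapshot of the nvmf target's tracepoints while it is still running
    "$rootdir/build/bin/spdk_trace" -s nvmf -i 0

    # or keep the shared-memory ring around for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0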
00:36:34.025 [2024-07-13 13:48:08.688053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.592 [2024-07-13 13:48:09.286435] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.592 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:35.162 null0 00:36:35.162 [2024-07-13 13:48:09.666105] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:35.162 [2024-07-13 13:48:09.690332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=450430 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 450430 /var/tmp/bperf.sock 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 450430 ']' 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:35.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:35.162 13:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:35.162 [2024-07-13 13:48:09.769637] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:35.162 [2024-07-13 13:48:09.769786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450430 ] 00:36:35.162 EAL: No free 2048 kB hugepages reported on node 1 00:36:35.162 [2024-07-13 13:48:09.894274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.422 [2024-07-13 13:48:10.146046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.989 13:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:35.989 13:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:35.989 13:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:35.989 13:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:36.247 13:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:36.247 13:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.247 13:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:36.247 13:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.247 13:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.247 13:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.816 nvme0n1 00:36:36.816 13:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:36.816 13:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.816 13:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:36.816 13:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.816 13:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:36.816 13:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:36.816 Running I/O for 2 seconds... 00:36:36.816 [2024-07-13 13:48:11.445447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.816 [2024-07-13 13:48:11.445538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.816 [2024-07-13 13:48:11.445584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.816 [2024-07-13 13:48:11.464903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.816 [2024-07-13 13:48:11.464950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.816 [2024-07-13 13:48:11.464978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.816 [2024-07-13 13:48:11.480212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.816 [2024-07-13 13:48:11.480288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.816 [2024-07-13 13:48:11.480336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.816 [2024-07-13 13:48:11.500826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.816 [2024-07-13 13:48:11.500896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.816 [2024-07-13 13:48:11.500942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.816 [2024-07-13 13:48:11.518793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.816 [2024-07-13 13:48:11.518847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.816 [2024-07-13 13:48:11.518894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.816 [2024-07-13 13:48:11.535978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.816 [2024-07-13 13:48:11.536022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.816 [2024-07-13 13:48:11.536050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.816 [2024-07-13 13:48:11.551486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:36.816 [2024-07-13 13:48:11.551543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:14763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.816 [2024-07-13 13:48:11.551570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.571987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 13:48:11.572047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.572076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.590260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 13:48:11.590303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.590329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.605402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 13:48:11.605442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.605467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.622638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 13:48:11.622678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.622709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.639794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 13:48:11.639850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.639900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.657454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 13:48:11.657500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.657530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.672340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 
13:48:11.672387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.672418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.693382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 13:48:11.693431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.693461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.712016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 13:48:11.712060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.712088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.727249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 13:48:11.727291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.727317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.745942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 13:48:11.745985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.746027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.761525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 13:48:11.761567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.761593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.781931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 13:48:11.781993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.782019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.077 [2024-07-13 13:48:11.803691] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.077 [2024-07-13 13:48:11.803739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.077 [2024-07-13 13:48:11.803769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.338 [2024-07-13 13:48:11.827093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.338 [2024-07-13 13:48:11.827138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.338 [2024-07-13 13:48:11.827179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:11.841896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:11.841955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:11.841980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:11.861524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:11.861566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:11.861592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:11.881052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:11.881111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:11.881139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:11.896846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:11.896893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:11.896919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:11.915039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:11.915081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:11.915107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:11.934426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:11.934477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:11.934514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:11.953776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:11.953825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:11.953856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:11.968663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:11.968712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:11.968743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:11.987015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:11.987056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:11.987081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:12.004453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:12.004493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:12.004517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:12.020314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:12.020358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:12.020385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:12.039317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:12.039358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:12.039384] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:12.054666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:12.054715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:12.054745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.339 [2024-07-13 13:48:12.072074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.339 [2024-07-13 13:48:12.072118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.339 [2024-07-13 13:48:12.072146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.089599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.089649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.089676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.106296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.106341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.106370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.121579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.121642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.121670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.139084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.139129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.139156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.155258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.155317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3686 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.155359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.175572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.175617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.175644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.190898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.190937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.190962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.207938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.207986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.208016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.225779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.225821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.225853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.241409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.241450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.241475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.260026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.260067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.260092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.275270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.275318] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.275347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.296304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.296353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.296382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.318193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.318249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.318280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.600 [2024-07-13 13:48:12.336300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.600 [2024-07-13 13:48:12.336356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.600 [2024-07-13 13:48:12.336380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.351983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.352027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.352055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.369322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.369370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.369400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.385833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.385887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.385915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.404921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.404964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.404991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.420067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.420108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.420132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.439850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.439905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.439949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.457136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.457194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.457220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.474001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.474042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.474068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.490402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.490458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.490484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.505363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.505410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.505439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 
13:48:12.524520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.524568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.524597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.541623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.541678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.541702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.562442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.562501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.562528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.577383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.577426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.577452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.860 [2024-07-13 13:48:12.598093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:37.860 [2024-07-13 13:48:12.598159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.860 [2024-07-13 13:48:12.598185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.616380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.616422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.616464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.633320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.633362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.633387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.649828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.649894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.649934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.666462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.666504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.666530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.681974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.682019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.682045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.701035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.701083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.701110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.720311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.720355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.720382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.734984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.735024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.735048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.754452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.754499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 
13:48:12.754528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.770142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.770201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.770226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.785478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.785525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.785554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.802331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.802389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.802431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.820121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.820178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.820220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.833768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.833813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.833839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.121 [2024-07-13 13:48:12.855129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.121 [2024-07-13 13:48:12.855175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.121 [2024-07-13 13:48:12.855218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:12.875176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:12.875240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 
nsid:1 lba:3280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:12.875271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:12.890324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:12.890370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:12.890398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:12.910539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:12.910582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:12.910606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:12.926307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:12.926349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:12.926374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:12.944476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:12.944517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:12.944542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:12.959164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:12.959206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:12.959232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:12.977502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:12.977556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:12.977600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:12.994864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 
13:48:12.994915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:12.994941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:13.008918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:13.008961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:13.008988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:13.028397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:13.028449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:13.028473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:13.047209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:13.047255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:13.047281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:13.062108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:13.062166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:13.062192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:13.081003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:13.081046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:13.081071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:13.098732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:13.098777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:13.098804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.380 [2024-07-13 13:48:13.118102] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.380 [2024-07-13 13:48:13.118162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.380 [2024-07-13 13:48:13.118190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.639 [2024-07-13 13:48:13.133547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.639 [2024-07-13 13:48:13.133588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.639 [2024-07-13 13:48:13.133628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.639 [2024-07-13 13:48:13.151758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.639 [2024-07-13 13:48:13.151805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.639 [2024-07-13 13:48:13.151832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.639 [2024-07-13 13:48:13.167740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.639 [2024-07-13 13:48:13.167784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.639 [2024-07-13 13:48:13.167811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.639 [2024-07-13 13:48:13.184485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.639 [2024-07-13 13:48:13.184529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.639 [2024-07-13 13:48:13.184556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.639 [2024-07-13 13:48:13.201065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.639 [2024-07-13 13:48:13.201108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.639 [2024-07-13 13:48:13.201135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.639 [2024-07-13 13:48:13.218620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.639 [2024-07-13 13:48:13.218665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.639 [2024-07-13 13:48:13.218707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.639 [2024-07-13 13:48:13.235793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.639 [2024-07-13 13:48:13.235834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.639 [2024-07-13 13:48:13.235859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.639 [2024-07-13 13:48:13.250129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.639 [2024-07-13 13:48:13.250188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.639 [2024-07-13 13:48:13.250215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.639 [2024-07-13 13:48:13.268335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.639 [2024-07-13 13:48:13.268386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.639 [2024-07-13 13:48:13.268412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.639 [2024-07-13 13:48:13.286111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.639 [2024-07-13 13:48:13.286154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.639 [2024-07-13 13:48:13.286179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.639 [2024-07-13 13:48:13.304655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.639 [2024-07-13 13:48:13.304715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.639 [2024-07-13 13:48:13.304740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.639 [2024-07-13 13:48:13.319951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.639 [2024-07-13 13:48:13.319994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.639 [2024-07-13 13:48:13.320019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.639 [2024-07-13 13:48:13.339284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:38.639 [2024-07-13 13:48:13.339327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.639 [2024-07-13 13:48:13.339352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.639 [2024-07-13 13:48:13.355500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.639 [2024-07-13 13:48:13.355545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.639 [2024-07-13 13:48:13.355573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.639 [2024-07-13 13:48:13.371133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.639 [2024-07-13 13:48:13.371176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.639 [2024-07-13 13:48:13.371202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.897 [2024-07-13 13:48:13.388376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.897 [2024-07-13 13:48:13.388417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.897 [2024-07-13 13:48:13.388441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.897 [2024-07-13 13:48:13.405530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.897 [2024-07-13 13:48:13.405575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.897 [2024-07-13 13:48:13.405602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.897 [2024-07-13 13:48:13.419758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:38.897 [2024-07-13 13:48:13.419798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:38.897 [2024-07-13 13:48:13.419823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:38.897
00:36:38.897 Latency(us)
00:36:38.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:38.897 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:38.897 nvme0n1 : 2.01 14561.51 56.88 0.00 0.00 8778.96 4636.07 25243.50
00:36:38.897 ===================================================================================================================
00:36:38.897 Total : 14561.51 56.88 0.00 0.00 8778.96 4636.07 25243.50
00:36:38.897 0
00:36:38.897 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:38.897 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:38.898 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:38.898 | .driver_specific
00:36:38.898 | .nvme_error
00:36:38.898 | .status_code
00:36:38.898 | .command_transient_transport_error'
00:36:38.898 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:39.157 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 114 > 0 ))
00:36:39.157 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 450430
00:36:39.157 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 450430 ']'
00:36:39.157 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 450430
00:36:39.157 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:39.157 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:39.157 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 450430
00:36:39.157 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:39.157 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:39.157 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 450430'
00:36:39.157 killing process with pid 450430
00:36:39.157 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 450430
00:36:39.157 Received shutdown signal, test time was about 2.000000 seconds
00:36:39.157
00:36:39.157 Latency(us)
00:36:39.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:39.157 ===================================================================================================================
00:36:39.157 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:39.157 13:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 450430
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=450973
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 450973 /var/tmp/bperf.sock
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 450973 ']'
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
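The trace above is the pass/fail core of nvmf_digest_error: after bdevperf has pushed reads through the TCP transport with crc32c corruption injected into the accel framework, host/digest.sh reads the per-bdev NVMe error counters over the bperf RPC socket and requires the COMMAND TRANSIENT TRANSPORT ERROR count (114 in this run) to be non-zero before killing the bperf process; the lines that follow repeat the same setup for a 128 KiB randread pass at queue depth 16. A minimal sketch of that check, assuming the SPDK workspace path used by this job and a bdevperf instance already listening on /var/tmp/bperf.sock; get_transient_errcount mirrors the helper traced above, and pointing accel_error_inject_error at the same socket is an assumption, since this log only shows it going through rpc_cmd:

  #!/usr/bin/env bash
  # Sketch of the digest-error check traced in this log; not the literal host/digest.sh.
  set -e
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

  # Same jq path as the traced get_transient_errcount: number of completions with
  # status COMMAND TRANSIENT TRANSPORT ERROR (00/22) recorded for one bdev.
  get_transient_errcount() {
      $BPERF_RPC bdev_get_iostat -b "$1" |
          jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  }

  # Enable per-controller error accounting and unlimited retries, then attach the
  # target with the TCP data digest turned on.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt every 32nd crc32c operation so received data digests stop matching.
  # (The log traces this via rpc_cmd; the socket used here is an assumption.)
  $BPERF_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the configured bdevperf job, then require at least one transient transport error.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
  (( $(get_transient_errcount nvme0n1) > 0 ))

The second pass below re-arms exactly this sequence before driving 131072-byte reads for two seconds.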
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:40.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:40.093 13:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:40.355 [2024-07-13 13:48:14.843143] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:40.355 [2024-07-13 13:48:14.843295] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450973 ]
00:36:40.355 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:40.355 Zero copy mechanism will not be used.
00:36:40.355 EAL: No free 2048 kB hugepages reported on node 1
00:36:40.355 [2024-07-13 13:48:14.979070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:40.615 [2024-07-13 13:48:15.233793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:41.179 13:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:41.179 13:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:41.179 13:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:41.179 13:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:41.438 13:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:41.438 13:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:41.438 13:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:41.438 13:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:41.438 13:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:41.438 13:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:42.007 nvme0n1
00:36:42.007 13:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:42.007 13:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:42.007 13:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:42.007 13:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:42.007 13:48:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:42.007 13:48:16 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:42.007 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:42.007 Zero copy mechanism will not be used. 00:36:42.007 Running I/O for 2 seconds... 00:36:42.007 [2024-07-13 13:48:16.658109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.007 [2024-07-13 13:48:16.658198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.007 [2024-07-13 13:48:16.658251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.007 [2024-07-13 13:48:16.669098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.007 [2024-07-13 13:48:16.669145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.007 [2024-07-13 13:48:16.669183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.007 [2024-07-13 13:48:16.679942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.007 [2024-07-13 13:48:16.679985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.007 [2024-07-13 13:48:16.680010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.007 [2024-07-13 13:48:16.690607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.007 [2024-07-13 13:48:16.690656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.007 [2024-07-13 13:48:16.690686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.007 [2024-07-13 13:48:16.701541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.007 [2024-07-13 13:48:16.701583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.007 [2024-07-13 13:48:16.701609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.007 [2024-07-13 13:48:16.712095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.007 [2024-07-13 13:48:16.712138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.007 [2024-07-13 13:48:16.712163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.007 [2024-07-13 13:48:16.722599] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.007 [2024-07-13 13:48:16.722648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.007 [2024-07-13 13:48:16.722678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.007 [2024-07-13 13:48:16.733351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.007 [2024-07-13 13:48:16.733393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.007 [2024-07-13 13:48:16.733418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.007 [2024-07-13 13:48:16.743816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.007 [2024-07-13 13:48:16.743874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.007 [2024-07-13 13:48:16.743901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.754418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.754468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.754497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.765032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.765074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.765098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.775744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.775816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.775845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.786356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.786399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.786424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.796882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.796943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.796968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.807361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.807403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.807428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.817923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.817965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.817990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.828788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.828837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.828884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.839457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.839498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.839522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.850286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.850336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.850365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.860921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.860962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.860986] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.871501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.871543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.871567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.882255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.882297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.882321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.892765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.892815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.892844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.903508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.903556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.903584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.914186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.914227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.914252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.924992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.925041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.925066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.935543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.935585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.935609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.946146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.946188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.946232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.957001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.957043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.957068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.967421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.967463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.967487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.978195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.978244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.978273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.989003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.989045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.989070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:16.999528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 13:48:16.999569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.269 [2024-07-13 13:48:16.999594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.269 [2024-07-13 13:48:17.010016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.269 [2024-07-13 
13:48:17.010059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.270 [2024-07-13 13:48:17.010091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.529 [2024-07-13 13:48:17.020702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.529 [2024-07-13 13:48:17.020743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.529 [2024-07-13 13:48:17.020767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.529 [2024-07-13 13:48:17.031427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.529 [2024-07-13 13:48:17.031475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.529 [2024-07-13 13:48:17.031504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.529 [2024-07-13 13:48:17.042346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.529 [2024-07-13 13:48:17.042395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.529 [2024-07-13 13:48:17.042424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.529 [2024-07-13 13:48:17.053095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.529 [2024-07-13 13:48:17.053136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.529 [2024-07-13 13:48:17.053162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.529 [2024-07-13 13:48:17.063689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.529 [2024-07-13 13:48:17.063731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.529 [2024-07-13 13:48:17.063755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.529 [2024-07-13 13:48:17.074415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.529 [2024-07-13 13:48:17.074464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.529 [2024-07-13 13:48:17.074493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.529 [2024-07-13 13:48:17.085399] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.529 [2024-07-13 13:48:17.085440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.529 [2024-07-13 13:48:17.085466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.529 [2024-07-13 13:48:17.095877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.529 [2024-07-13 13:48:17.095924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.529 [2024-07-13 13:48:17.095965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.529 [2024-07-13 13:48:17.106523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.529 [2024-07-13 13:48:17.106572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.529 [2024-07-13 13:48:17.106598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.529 [2024-07-13 13:48:17.117314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.529 [2024-07-13 13:48:17.117363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.529 [2024-07-13 13:48:17.117392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.529 [2024-07-13 13:48:17.128052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.529 [2024-07-13 13:48:17.128095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.128120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.530 [2024-07-13 13:48:17.138637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.530 [2024-07-13 13:48:17.138686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.138715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.530 [2024-07-13 13:48:17.149408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.530 [2024-07-13 13:48:17.149457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.149485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.530 [2024-07-13 13:48:17.160068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.530 [2024-07-13 13:48:17.160112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.160138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.530 [2024-07-13 13:48:17.170481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.530 [2024-07-13 13:48:17.170521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.170546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.530 [2024-07-13 13:48:17.181098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.530 [2024-07-13 13:48:17.181140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.181164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.530 [2024-07-13 13:48:17.191545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.530 [2024-07-13 13:48:17.191593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.191630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.530 [2024-07-13 13:48:17.202621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.530 [2024-07-13 13:48:17.202665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.202690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.530 [2024-07-13 13:48:17.214009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.530 [2024-07-13 13:48:17.214057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.214087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.530 [2024-07-13 13:48:17.224942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.530 [2024-07-13 13:48:17.224982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.225007] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.530 [2024-07-13 13:48:17.235410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.530 [2024-07-13 13:48:17.235457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.235487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.530 [2024-07-13 13:48:17.245804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.530 [2024-07-13 13:48:17.245847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.245880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.530 [2024-07-13 13:48:17.256321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.530 [2024-07-13 13:48:17.256362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.256386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.530 [2024-07-13 13:48:17.266732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.530 [2024-07-13 13:48:17.266773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.530 [2024-07-13 13:48:17.266803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.789 [2024-07-13 13:48:17.277300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.789 [2024-07-13 13:48:17.277343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.789 [2024-07-13 13:48:17.277368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.789 [2024-07-13 13:48:17.287947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.287994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.288019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.298570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.298617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.298646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.309445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.309486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.309511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.320117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.320159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.320183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.330683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.330732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.330762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.341260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.341300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.341324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.351815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.351862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.351904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.362431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.362472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.362498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.372879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 
13:48:17.372941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.372966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.383371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.383412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.383437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.393830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.393878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.393905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.404512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.404561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.404589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.415052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.415094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.415119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.425537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.425578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.425603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.435927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.435969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.435993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.446347] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.446389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.446415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.456907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.456967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.456992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.467304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.467353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.467378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.477967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.478008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.478033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.488498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.488538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.488562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.499218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.499258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.499302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.509997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.510040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.510066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.520631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.520679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.520707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.790 [2024-07-13 13:48:17.531384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.790 [2024-07-13 13:48:17.531432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.790 [2024-07-13 13:48:17.531461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.542453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.542494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.542518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.553051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.553090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.553114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.563813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.563859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.563901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.574500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.574543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.574568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.585142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.585182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.585206] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.595475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.595516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.595540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.606144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.606184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.606209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.616765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.616806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.616829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.627414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.627457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.627481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.638209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.638250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.638274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.648781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.648827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.648876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.659350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.659391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.659415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.669850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.669907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.669950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.680612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.680654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.680694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.691272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.691320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.691349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.702003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.702043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.702076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.712751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.712808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.712837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.723536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.723588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.723612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.734312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 
13:48:17.734369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.734398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.745225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.745290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.745316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.756361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.756417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.756446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.767501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.767548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.767577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.778593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.778640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.778669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.050 [2024-07-13 13:48:17.789601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.050 [2024-07-13 13:48:17.789656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.050 [2024-07-13 13:48:17.789686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.800984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.801025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.801058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.812042] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.812083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.812117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.823206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.823264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.823293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.834291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.834358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.834387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.845351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.845398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.845434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.856604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.856661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.856690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.867703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.867750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.867786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.878713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.878771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.878800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.889828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.889890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.889947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.901038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.901088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.901113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.912088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.912139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.912163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.923160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.923233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.923262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.934243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.934299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.934328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.945419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.945477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.945507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.956655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.956711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.956740] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.967543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.967602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.967631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.978619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.978676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.978705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:17.989894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:17.989955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:17.989988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:18.001034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.309 [2024-07-13 13:48:18.001087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.309 [2024-07-13 13:48:18.001112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.309 [2024-07-13 13:48:18.012124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.310 [2024-07-13 13:48:18.012190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.310 [2024-07-13 13:48:18.012220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.310 [2024-07-13 13:48:18.023071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.310 [2024-07-13 13:48:18.023130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.310 [2024-07-13 13:48:18.023172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.310 [2024-07-13 13:48:18.034141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.310 [2024-07-13 13:48:18.034208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.310 [2024-07-13 13:48:18.034238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.310 [2024-07-13 13:48:18.045259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.310 [2024-07-13 13:48:18.045316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.310 [2024-07-13 13:48:18.045344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.056476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.056531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.056560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.068052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.068093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.068117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.079079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.079131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.079156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.089790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.089846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.089886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.100723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.100777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.100806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.111798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.111855] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.111897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.122703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.122759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.122789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.133724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.133783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.133811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.144710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.144765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.144794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.155768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.155842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.155879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.166819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.166883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.166929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.177907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.177978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.178003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.188986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.189027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.189051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.199927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.199979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.200003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.210936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.210993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.211019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.221977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.222027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.222051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.233156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.233228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.233257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.244185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.244240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.244270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.255335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.255393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.255422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.569 
[2024-07-13 13:48:18.266427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.266484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.266513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.277784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.277842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.277892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.288882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.288944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.288969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.299963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.300004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.300028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.569 [2024-07-13 13:48:18.311194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.569 [2024-07-13 13:48:18.311248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-07-13 13:48:18.311276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.322775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.322836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.322875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.333967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.334009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.334034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.345109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.345179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.345204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.356429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.356478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.356508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.367738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.367796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.367825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.378920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.378965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.378991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.389999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.390041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.390068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.401006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.401055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.401081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.412086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.412127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 
[2024-07-13 13:48:18.412151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.423309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.423365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.423394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.434479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.434536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.434565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.445670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.445726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.445754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.456723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.456779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.456807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.467689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.467747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.467776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.478693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.478747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.478776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.489718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.489764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.489793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.500658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.500717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.500747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.511554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.511599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.511627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.522462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.522517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.522546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.533257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.533313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.533342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.544197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.544252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.544281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.555322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 [2024-07-13 13:48:18.555379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.555407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.829 [2024-07-13 13:48:18.566312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.829 
[2024-07-13 13:48:18.566367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.829 [2024-07-13 13:48:18.566395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:44.088 [2024-07-13 13:48:18.577418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.088 [2024-07-13 13:48:18.577477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.088 [2024-07-13 13:48:18.577506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:44.088 [2024-07-13 13:48:18.588234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.088 [2024-07-13 13:48:18.588299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.088 [2024-07-13 13:48:18.588328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:44.088 [2024-07-13 13:48:18.599142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.088 [2024-07-13 13:48:18.599212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.088 [2024-07-13 13:48:18.599241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.088 [2024-07-13 13:48:18.610399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.088 [2024-07-13 13:48:18.610454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.088 [2024-07-13 13:48:18.610483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:44.088 [2024-07-13 13:48:18.621345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.088 [2024-07-13 13:48:18.621402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.088 [2024-07-13 13:48:18.621431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:44.088 [2024-07-13 13:48:18.632476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.088 [2024-07-13 13:48:18.632522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.088 [2024-07-13 13:48:18.632556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:44.088 [2024-07-13 13:48:18.643283] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.088 [2024-07-13 13:48:18.643352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.088 [2024-07-13 13:48:18.643377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.088 00:36:44.088 Latency(us) 00:36:44.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.088 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:44.088 nvme0n1 : 2.00 2849.82 356.23 0.00 0.00 5605.17 5048.70 15146.10 00:36:44.088 =================================================================================================================== 00:36:44.089 Total : 2849.82 356.23 0.00 0.00 5605.17 5048.70 15146.10 00:36:44.089 0 00:36:44.089 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:44.089 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:44.089 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:44.089 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:44.089 | .driver_specific 00:36:44.089 | .nvme_error 00:36:44.089 | .status_code 00:36:44.089 | .command_transient_transport_error' 00:36:44.349 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 184 > 0 )) 00:36:44.349 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 450973 00:36:44.349 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 450973 ']' 00:36:44.349 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 450973 00:36:44.349 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:44.349 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:44.349 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 450973 00:36:44.349 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:44.349 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:44.349 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 450973' 00:36:44.349 killing process with pid 450973 00:36:44.349 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 450973 00:36:44.349 Received shutdown signal, test time was about 2.000000 seconds 00:36:44.349 00:36:44.349 Latency(us) 00:36:44.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.349 =================================================================================================================== 00:36:44.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:44.349 13:48:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 450973 00:36:45.287 13:48:19 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:45.287 13:48:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:45.287 13:48:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:45.287 13:48:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:45.287 13:48:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:45.287 13:48:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=451633 00:36:45.287 13:48:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:45.287 13:48:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 451633 /var/tmp/bperf.sock 00:36:45.287 13:48:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 451633 ']' 00:36:45.287 13:48:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:45.287 13:48:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:45.287 13:48:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:45.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:45.287 13:48:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:45.287 13:48:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:45.547 [2024-07-13 13:48:20.068326] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
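For reference, the bperf launch captured in the xtrace above boils down to starting a stand-alone bdevperf that idles until it is driven over its own RPC socket; a minimal re-creation (run from the spdk checkout, with the harness's waitforlisten helper replaced by a plain polling loop) might look like:

    # core mask 0x2 = core 1, private RPC socket, 4 KiB randwrite, qd 128, 2 s run time;
    # -z keeps bdevperf idle until a perform_tests RPC arrives
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # wait until the RPC socket answers before sending any configuration RPCs
    until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
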
00:36:45.547 [2024-07-13 13:48:20.068560] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451633 ] 00:36:45.547 EAL: No free 2048 kB hugepages reported on node 1 00:36:45.547 [2024-07-13 13:48:20.204152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:45.807 [2024-07-13 13:48:20.441283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:46.374 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:46.374 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:46.374 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:46.374 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:46.661 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:46.661 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.661 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:46.661 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.661 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:46.661 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:47.235 nvme0n1 00:36:47.235 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:47.235 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.235 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.235 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.235 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:47.235 13:48:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:47.235 Running I/O for 2 seconds... 
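Condensed from the same xtrace, the digest-error setup and the check that follows the 2-second run are, in order (a sketch only: in the run above accel_error_inject_error goes to the default rpc_cmd socket rather than bperf.sock, and the absolute workspace paths are shortened):

    # keep per-status NVMe error counters and retry failed commands indefinitely
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach the target subsystem with data digest enabled so every TCP data PDU carries a CRC32C DDGST
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # turn on crc32c error injection of type 'corrupt' (flags copied verbatim from the xtrace above)
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # kick off the queued bdevperf job, then read back how many commands completed with a
    # transient transport error (the count the test asserts to be greater than zero)
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
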
00:36:47.236 [2024-07-13 13:48:21.957080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:36:47.236 [2024-07-13 13:48:21.958522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.236 [2024-07-13 13:48:21.958584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:47.236 [2024-07-13 13:48:21.972597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:36:47.236 [2024-07-13 13:48:21.973898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.236 [2024-07-13 13:48:21.973959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:21.991151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb480 00:36:47.497 [2024-07-13 13:48:21.992666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:21.992713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.007814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:36:47.497 [2024-07-13 13:48:22.009533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.009579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.024067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:36:47.497 [2024-07-13 13:48:22.025828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.025882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.040265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2510 00:36:47.497 [2024-07-13 13:48:22.042005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.042047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.055116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:36:47.497 [2024-07-13 13:48:22.056814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.056858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.069976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7c50 00:36:47.497 [2024-07-13 13:48:22.071091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.071131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.086137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195efae0 00:36:47.497 [2024-07-13 13:48:22.087035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.087076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.104325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:36:47.497 [2024-07-13 13:48:22.106411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.106456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.118976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:36:47.497 [2024-07-13 13:48:22.120453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.120499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.136451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:36:47.497 [2024-07-13 13:48:22.138795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.138841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.151449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:36:47.497 [2024-07-13 13:48:22.153235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.153286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.165409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:36:47.497 [2024-07-13 13:48:22.167925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.167965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.179597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:36:47.497 [2024-07-13 13:48:22.180636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.180692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.195201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:36:47.497 [2024-07-13 13:48:22.196427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.196483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.209366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:36:47.497 [2024-07-13 13:48:22.210578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.210633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:47.497 [2024-07-13 13:48:22.226032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:36:47.497 [2024-07-13 13:48:22.227521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.497 [2024-07-13 13:48:22.227576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.241616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:36:47.758 [2024-07-13 13:48:22.243328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.243383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.255988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:36:47.758 [2024-07-13 13:48:22.257488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.257526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.269874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:36:47.758 [2024-07-13 13:48:22.270899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5415 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:47.758 [2024-07-13 13:48:22.270939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.284784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de470 00:36:47.758 [2024-07-13 13:48:22.285771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.285826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.300190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e73e0 00:36:47.758 [2024-07-13 13:48:22.301364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.301404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.315602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5a90 00:36:47.758 [2024-07-13 13:48:22.316851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.316899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.330843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:36:47.758 [2024-07-13 13:48:22.331856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.331915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.347862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8a50 00:36:47.758 [2024-07-13 13:48:22.350012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.350053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.361449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:36:47.758 [2024-07-13 13:48:22.363026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.363067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.375147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f46d0 00:36:47.758 [2024-07-13 13:48:22.377558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:40 nsid:1 lba:22042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.377598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.389254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ddc00 00:36:47.758 [2024-07-13 13:48:22.390261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.390299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.404740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f81e0 00:36:47.758 [2024-07-13 13:48:22.405968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.406008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.419960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed920 00:36:47.758 [2024-07-13 13:48:22.421119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.421159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.435313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:47.758 [2024-07-13 13:48:22.436659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.436698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.449669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1430 00:36:47.758 [2024-07-13 13:48:22.451027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.451068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.466597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:36:47.758 [2024-07-13 13:48:22.468225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.468281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.481805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:36:47.758 [2024-07-13 13:48:22.483412] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.483467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:47.758 [2024-07-13 13:48:22.497450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:36:47.758 [2024-07-13 13:48:22.499274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:47.758 [2024-07-13 13:48:22.499313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:48.019 [2024-07-13 13:48:22.510183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:36:48.019 [2024-07-13 13:48:22.511093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.019 [2024-07-13 13:48:22.511132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:48.019 [2024-07-13 13:48:22.525541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea680 00:36:48.019 [2024-07-13 13:48:22.526411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.019 [2024-07-13 13:48:22.526452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:48.019 [2024-07-13 13:48:22.540748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:48.019 [2024-07-13 13:48:22.541967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.019 [2024-07-13 13:48:22.542016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:48.019 [2024-07-13 13:48:22.555812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa3a0 00:36:48.019 [2024-07-13 13:48:22.557248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.019 [2024-07-13 13:48:22.557302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:48.019 [2024-07-13 13:48:22.569895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fbcf0 00:36:48.019 [2024-07-13 13:48:22.571231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.019 [2024-07-13 13:48:22.571269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.019 [2024-07-13 13:48:22.586464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195f7970 00:36:48.019 [2024-07-13 13:48:22.588044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.019 [2024-07-13 13:48:22.588084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:48.019 [2024-07-13 13:48:22.601698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee190 00:36:48.019 [2024-07-13 13:48:22.603447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.019 [2024-07-13 13:48:22.603501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:48.019 [2024-07-13 13:48:22.615767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:36:48.020 [2024-07-13 13:48:22.617559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.020 [2024-07-13 13:48:22.617614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.020 [2024-07-13 13:48:22.629659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:36:48.020 [2024-07-13 13:48:22.630826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.020 [2024-07-13 13:48:22.630874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:48.020 [2024-07-13 13:48:22.644886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:36:48.020 [2024-07-13 13:48:22.645991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.020 [2024-07-13 13:48:22.646031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.020 [2024-07-13 13:48:22.660163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc560 00:36:48.020 [2024-07-13 13:48:22.661519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.020 [2024-07-13 13:48:22.661559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.020 [2024-07-13 13:48:22.675474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:36:48.020 [2024-07-13 13:48:22.676698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.020 [2024-07-13 13:48:22.676739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:48.020 [2024-07-13 13:48:22.690939] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebfd0 00:36:48.020 [2024-07-13 13:48:22.692596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.020 [2024-07-13 13:48:22.692652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:48.020 [2024-07-13 13:48:22.706469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:36:48.020 [2024-07-13 13:48:22.708201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.020 [2024-07-13 13:48:22.708241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:48.020 [2024-07-13 13:48:22.720798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:48.020 [2024-07-13 13:48:22.722533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.020 [2024-07-13 13:48:22.722588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.020 [2024-07-13 13:48:22.734872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:36:48.020 [2024-07-13 13:48:22.735965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.020 [2024-07-13 13:48:22.736010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:48.020 [2024-07-13 13:48:22.751696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:36:48.020 [2024-07-13 13:48:22.753619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.020 [2024-07-13 13:48:22.753658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:48.294 [2024-07-13 13:48:22.765937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208 00:36:48.294 [2024-07-13 13:48:22.767447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.294 [2024-07-13 13:48:22.767488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.294 [2024-07-13 13:48:22.781289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:36:48.294 [2024-07-13 13:48:22.782461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.295 [2024-07-13 13:48:22.782501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:48.295 
[2024-07-13 13:48:22.798289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec840 00:36:48.295 [2024-07-13 13:48:22.800556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.295 [2024-07-13 13:48:22.800620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:48.295 [2024-07-13 13:48:22.811825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:36:48.295 [2024-07-13 13:48:22.813624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.295 [2024-07-13 13:48:22.813678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.295 [2024-07-13 13:48:22.825400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7100 00:36:48.295 [2024-07-13 13:48:22.827937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.295 [2024-07-13 13:48:22.827977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:48.295 [2024-07-13 13:48:22.839263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4140 00:36:48.295 [2024-07-13 13:48:22.840403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.295 [2024-07-13 13:48:22.840458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:48.295 [2024-07-13 13:48:22.854644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:36:48.295 [2024-07-13 13:48:22.855951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.295 [2024-07-13 13:48:22.855990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:48.295 [2024-07-13 13:48:22.868716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:36:48.295 [2024-07-13 13:48:22.870028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.295 [2024-07-13 13:48:22.870068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.295 [2024-07-13 13:48:22.885479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:36:48.295 [2024-07-13 13:48:22.887061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.295 [2024-07-13 13:48:22.887101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:48.295 [2024-07-13 13:48:22.899799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea680 00:36:48.296 [2024-07-13 13:48:22.901327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.296 [2024-07-13 13:48:22.901392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:48.296 [2024-07-13 13:48:22.916660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:36:48.296 [2024-07-13 13:48:22.918480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.296 [2024-07-13 13:48:22.918518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:48.296 [2024-07-13 13:48:22.932312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea680 00:36:48.296 [2024-07-13 13:48:22.934203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.296 [2024-07-13 13:48:22.934241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:48.296 [2024-07-13 13:48:22.946588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:36:48.296 [2024-07-13 13:48:22.948433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.296 [2024-07-13 13:48:22.948471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:48.296 [2024-07-13 13:48:22.960422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:36:48.296 [2024-07-13 13:48:22.961743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.296 [2024-07-13 13:48:22.961781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.296 [2024-07-13 13:48:22.975368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de470 00:36:48.296 [2024-07-13 13:48:22.976742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.296 [2024-07-13 13:48:22.976783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.296 [2024-07-13 13:48:22.991181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0350 00:36:48.296 [2024-07-13 13:48:22.992563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.296 [2024-07-13 13:48:22.992618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:48.296 [2024-07-13 13:48:23.006443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec840 00:36:48.296 [2024-07-13 13:48:23.007816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.297 [2024-07-13 13:48:23.007856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:48.297 [2024-07-13 13:48:23.023783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:36:48.297 [2024-07-13 13:48:23.026141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.297 [2024-07-13 13:48:23.026203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:48.559 [2024-07-13 13:48:23.038071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:36:48.559 [2024-07-13 13:48:23.039892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.559 [2024-07-13 13:48:23.039937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:48.559 [2024-07-13 13:48:23.052027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea680 00:36:48.559 [2024-07-13 13:48:23.054628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.559 [2024-07-13 13:48:23.054668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.559 [2024-07-13 13:48:23.066086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:36:48.559 [2024-07-13 13:48:23.067229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.559 [2024-07-13 13:48:23.067268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:48.559 [2024-07-13 13:48:23.081485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7970 00:36:48.559 [2024-07-13 13:48:23.082804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.559 [2024-07-13 13:48:23.082843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:48.559 [2024-07-13 13:48:23.095585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:36:48.559 [2024-07-13 13:48:23.096944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23888 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:48.559 [2024-07-13 13:48:23.096984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:48.559 [2024-07-13 13:48:23.112217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:36:48.559 [2024-07-13 13:48:23.113741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.559 [2024-07-13 13:48:23.113782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:48.559 [2024-07-13 13:48:23.127365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:36:48.560 [2024-07-13 13:48:23.128819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.560 [2024-07-13 13:48:23.128859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.560 [2024-07-13 13:48:23.144525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:36:48.560 [2024-07-13 13:48:23.146924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.560 [2024-07-13 13:48:23.146964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:48.560 [2024-07-13 13:48:23.155061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:36:48.560 [2024-07-13 13:48:23.156207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.560 [2024-07-13 13:48:23.156261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:48.560 [2024-07-13 13:48:23.172094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f81e0 00:36:48.560 [2024-07-13 13:48:23.173909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.560 [2024-07-13 13:48:23.173950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:48.560 [2024-07-13 13:48:23.185743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:36:48.560 [2024-07-13 13:48:23.187055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.560 [2024-07-13 13:48:23.187103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:48.560 [2024-07-13 13:48:23.200707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc560 00:36:48.560 [2024-07-13 13:48:23.201927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:87 nsid:1 lba:19448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.560 [2024-07-13 13:48:23.201968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:48.560 [2024-07-13 13:48:23.218069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:36:48.560 [2024-07-13 13:48:23.220240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.560 [2024-07-13 13:48:23.220279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:48.560 [2024-07-13 13:48:23.231873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaef0 00:36:48.560 [2024-07-13 13:48:23.233628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.560 [2024-07-13 13:48:23.233669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:48.560 [2024-07-13 13:48:23.245471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:36:48.560 [2024-07-13 13:48:23.248032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.560 [2024-07-13 13:48:23.248072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:48.560 [2024-07-13 13:48:23.259252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2510 00:36:48.560 [2024-07-13 13:48:23.260400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.560 [2024-07-13 13:48:23.260453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:48.560 [2024-07-13 13:48:23.274564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:36:48.560 [2024-07-13 13:48:23.275784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.560 [2024-07-13 13:48:23.275823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:48.560 [2024-07-13 13:48:23.289745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:48.560 [2024-07-13 13:48:23.291057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.560 [2024-07-13 13:48:23.291097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:48.819 [2024-07-13 13:48:23.305450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:36:48.819 [2024-07-13 13:48:23.307005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.819 [2024-07-13 13:48:23.307045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:48.819 [2024-07-13 13:48:23.322880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:36:48.819 [2024-07-13 13:48:23.325175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.819 [2024-07-13 13:48:23.325230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:48.819 [2024-07-13 13:48:23.336534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7c50 00:36:48.819 [2024-07-13 13:48:23.338255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.819 [2024-07-13 13:48:23.338295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:48.819 [2024-07-13 13:48:23.353441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:36:48.819 [2024-07-13 13:48:23.356162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.819 [2024-07-13 13:48:23.356217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:48.819 [2024-07-13 13:48:23.364930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208 00:36:48.819 [2024-07-13 13:48:23.366085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.819 [2024-07-13 13:48:23.366123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:48.819 [2024-07-13 13:48:23.383089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:36:48.819 [2024-07-13 13:48:23.385090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.819 [2024-07-13 13:48:23.385130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:48.819 [2024-07-13 13:48:23.398032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:36:48.819 [2024-07-13 13:48:23.399385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.819 [2024-07-13 13:48:23.399429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:48.819 [2024-07-13 13:48:23.414090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195ea248 00:36:48.819 [2024-07-13 13:48:23.415327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.819 [2024-07-13 13:48:23.415366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:48.819 [2024-07-13 13:48:23.432373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:36:48.819 [2024-07-13 13:48:23.434781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.819 [2024-07-13 13:48:23.434824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:48.819 [2024-07-13 13:48:23.447089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fef90 00:36:48.819 [2024-07-13 13:48:23.448924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.819 [2024-07-13 13:48:23.448984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:48.819 [2024-07-13 13:48:23.461416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:36:48.819 [2024-07-13 13:48:23.464018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.820 [2024-07-13 13:48:23.464057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:48.820 [2024-07-13 13:48:23.476105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd640 00:36:48.820 [2024-07-13 13:48:23.477281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.820 [2024-07-13 13:48:23.477325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:48.820 [2024-07-13 13:48:23.492376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:36:48.820 [2024-07-13 13:48:23.493713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.820 [2024-07-13 13:48:23.493766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:48.820 [2024-07-13 13:48:23.507171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:36:48.820 [2024-07-13 13:48:23.508544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.820 [2024-07-13 13:48:23.508586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:48.820 [2024-07-13 13:48:23.524770] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:36:48.820 [2024-07-13 13:48:23.526398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.820 [2024-07-13 13:48:23.526441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:48.820 [2024-07-13 13:48:23.540834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 00:36:48.820 [2024-07-13 13:48:23.542301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.820 [2024-07-13 13:48:23.542345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:48.820 [2024-07-13 13:48:23.558946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc560 00:36:48.820 [2024-07-13 13:48:23.561527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:48.820 [2024-07-13 13:48:23.561564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.570061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:36:49.080 [2024-07-13 13:48:23.571137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.571181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.584027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:36:49.080 [2024-07-13 13:48:23.585094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.585132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.601183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195edd58 00:36:49.080 [2024-07-13 13:48:23.602599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.602654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.617806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e73e0 00:36:49.080 [2024-07-13 13:48:23.619420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.619475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:49.080 
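Each corrupted write in the stream above produces the same two-line pattern: a data digest failure reported by the TCP transport (tcp.c: data_crc32_calc_done) followed by the completion that nvme_qpair.c prints with status (00/22), which the driver spells out as COMMAND TRANSIENT TRANSPORT ERROR. The authoritative count of these completions is read back over the bperf RPC socket by the digest.sh trace further below; as a rough cross-check, the same pairs can simply be counted in a saved copy of this console output. A minimal sketch, assuming the output has been captured to a file (console.log here is a placeholder name):

  # host-side completions carrying the transient transport error status
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log
  # digest failures reported by the TCP transport
  grep -c 'Data digest error on tqpair' console.log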
[2024-07-13 13:48:23.632952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:36:49.080 [2024-07-13 13:48:23.634543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.634587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.651269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:36:49.080 [2024-07-13 13:48:23.653212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.653272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.668016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6890 00:36:49.080 [2024-07-13 13:48:23.670017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.670056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.683196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:36:49.080 [2024-07-13 13:48:23.685262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.685306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.698091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:36:49.080 [2024-07-13 13:48:23.699447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.699490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.716193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0630 00:36:49.080 [2024-07-13 13:48:23.718427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.718472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.731146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:36:49.080 [2024-07-13 13:48:23.732735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.732778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.747466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9e10 00:36:49.080 [2024-07-13 13:48:23.748886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.748944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.765520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8d30 00:36:49.080 [2024-07-13 13:48:23.768195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.768238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.776745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:36:49.080 [2024-07-13 13:48:23.777900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.777955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.794664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fcdd0 00:36:49.080 [2024-07-13 13:48:23.796646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.796697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:49.080 [2024-07-13 13:48:23.809327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:36:49.080 [2024-07-13 13:48:23.810625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.080 [2024-07-13 13:48:23.810668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:49.339 [2024-07-13 13:48:23.827071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:36:49.339 [2024-07-13 13:48:23.829351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.339 [2024-07-13 13:48:23.829395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:49.339 [2024-07-13 13:48:23.841783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f57b0 00:36:49.339 [2024-07-13 13:48:23.843311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.339 [2024-07-13 13:48:23.843355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:49.339 [2024-07-13 13:48:23.857755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fdeb0 00:36:49.339 [2024-07-13 13:48:23.859222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.339 [2024-07-13 13:48:23.859281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:49.339 [2024-07-13 13:48:23.875797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea680 00:36:49.339 [2024-07-13 13:48:23.878353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.339 [2024-07-13 13:48:23.878397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:49.339 [2024-07-13 13:48:23.886952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc128 00:36:49.339 [2024-07-13 13:48:23.888071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.339 [2024-07-13 13:48:23.888109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:49.339 [2024-07-13 13:48:23.901811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1430 00:36:49.339 [2024-07-13 13:48:23.902941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.339 [2024-07-13 13:48:23.902979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:49.339 [2024-07-13 13:48:23.919430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc560 00:36:49.339 [2024-07-13 13:48:23.920761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.339 [2024-07-13 13:48:23.920804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:49.339 [2024-07-13 13:48:23.935629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fbcf0 00:36:49.339 [2024-07-13 13:48:23.937240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:49.339 [2024-07-13 13:48:23.937283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:49.339 00:36:49.339 Latency(us) 00:36:49.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:49.339 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:49.339 nvme0n1 : 2.01 16615.09 64.90 0.00 0.00 7688.34 3276.80 19418.07 00:36:49.339 
=================================================================================================================== 00:36:49.339 Total : 16615.09 64.90 0.00 0.00 7688.34 3276.80 19418.07 00:36:49.339 0 00:36:49.339 13:48:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:49.339 13:48:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:49.339 13:48:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:49.339 | .driver_specific 00:36:49.339 | .nvme_error 00:36:49.339 | .status_code 00:36:49.339 | .command_transient_transport_error' 00:36:49.339 13:48:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:49.599 13:48:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 130 > 0 )) 00:36:49.599 13:48:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 451633 00:36:49.599 13:48:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 451633 ']' 00:36:49.599 13:48:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 451633 00:36:49.599 13:48:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:49.599 13:48:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:49.599 13:48:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 451633 00:36:49.599 13:48:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:49.599 13:48:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:49.599 13:48:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 451633' 00:36:49.599 killing process with pid 451633 00:36:49.599 13:48:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 451633 00:36:49.599 Received shutdown signal, test time was about 2.000000 seconds 00:36:49.599 00:36:49.599 Latency(us) 00:36:49.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:49.599 =================================================================================================================== 00:36:49.599 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:49.599 13:48:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 451633 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=452293 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 
16 -z 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 452293 /var/tmp/bperf.sock 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 452293 ']' 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:50.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:50.975 13:48:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:50.975 [2024-07-13 13:48:25.384222] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:50.975 [2024-07-13 13:48:25.384372] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452293 ] 00:36:50.975 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:50.975 Zero copy mechanism will not be used. 00:36:50.975 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.975 [2024-07-13 13:48:25.506681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.235 [2024-07-13 13:48:25.750884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.804 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:51.804 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:51.804 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:51.804 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:51.804 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:52.063 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.063 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:52.063 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.063 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:52.063 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:52.323 nvme0n1 00:36:52.323 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd 
accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:52.323 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.323 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:52.323 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.323 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:52.323 13:48:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:52.323 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:52.323 Zero copy mechanism will not be used. 00:36:52.323 Running I/O for 2 seconds... 00:36:52.323 [2024-07-13 13:48:27.046226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.323 [2024-07-13 13:48:27.046776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.323 [2024-07-13 13:48:27.046834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.323 [2024-07-13 13:48:27.061443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.323 [2024-07-13 13:48:27.061950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.323 [2024-07-13 13:48:27.061993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.077634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.078100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.078144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.091285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.091796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.091876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.104594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.104832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.104887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.118746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.119238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.119286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.132821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.133076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.133118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.146504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.146984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.147027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.159950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.160493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.160540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.173187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.173640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.173702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.186593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.187159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.187220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.199901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.200423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.200470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.583 
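The digest.sh trace interleaved above shows how this second error pass (randwrite, 131072-byte I/O, queue depth 16) is wired up before the corrupted writes start flowing. Condensed into one place as a sketch rather than the script itself, with paths written relative to the SPDK checkout and with the socket, address, and NQN being the ones this particular run uses:

  # start bdevperf on its own RPC socket; -z holds I/O until perform_tests is sent
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  # keep per-status NVMe error counters; -1 lets the bdev layer retry failed I/O indefinitely
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # crc32c error injection stays disabled while the controller is attached
  # (the trace issues the accel_error_inject_error calls through rpc_cmd, not bperf_rpc)
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # attach the target subsystem over TCP with data digest enabled
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # now corrupt the crc32c results and kick off the timed run
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests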
[2024-07-13 13:48:27.212303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.212782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.212828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.226310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.226747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.226808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.239767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.240235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.240294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.251659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.252143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.252199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.265245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.265693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.265733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.278821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.279388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.279435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.292008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.292448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.292489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.305256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.305813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.305860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.583 [2024-07-13 13:48:27.319616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.583 [2024-07-13 13:48:27.320096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.583 [2024-07-13 13:48:27.320138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.333928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.334385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.334426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.346430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.346862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.346929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.359315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.359749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.359809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.373040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.373544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.373606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.385944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.386462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 
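Compared with the first pass, where each WRITE carried len:1 for a 4096-byte I/O, the entries here show len:32 and 32-aligned LBAs: the 131072-byte bdevperf I/O size simply spans 32 blocks, consistent with a 4096-byte block size on the namespace. The arithmetic, as a one-line check:

  echo $(( 131072 / 4096 ))   # prints 32, matching the len:32 in the WRITE commands above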
13:48:27.386507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.398493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.398950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.398992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.411226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.411615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.411655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.424144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.424583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.424639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.437453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.437907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.437966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.450428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.450931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.450978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.463395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.463853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.463909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.475619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.476057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.476099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.488577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.488995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.489037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.502281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.502712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.502753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.515284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.515733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.515794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.529337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.529753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.529812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.542496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.542977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.543023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.555774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.556207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.844 [2024-07-13 13:48:27.556249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.844 [2024-07-13 13:48:27.569333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.844 [2024-07-13 13:48:27.569754] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.845 [2024-07-13 13:48:27.569794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.845 [2024-07-13 13:48:27.583441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:52.845 [2024-07-13 13:48:27.583965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.845 [2024-07-13 13:48:27.584021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.596501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.596950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.596993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.610112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.610563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.610623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.623779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.624251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.624312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.635644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.636106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.636147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.649622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.650074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.650116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.662456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.662676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.662743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.675551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.676038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.676081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.688494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.688953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.688995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.701286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.701716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.701764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.714002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.714152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.714191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.726738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.727160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.727200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.740553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.741024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.741066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 
13:48:27.753918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.754388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.754429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.767095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.767541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.105 [2024-07-13 13:48:27.767596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.105 [2024-07-13 13:48:27.780005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.105 [2024-07-13 13:48:27.780460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.106 [2024-07-13 13:48:27.780507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.106 [2024-07-13 13:48:27.792633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.106 [2024-07-13 13:48:27.793208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.106 [2024-07-13 13:48:27.793256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.106 [2024-07-13 13:48:27.805001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.106 [2024-07-13 13:48:27.805468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.106 [2024-07-13 13:48:27.805514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.106 [2024-07-13 13:48:27.817501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.106 [2024-07-13 13:48:27.817928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.106 [2024-07-13 13:48:27.817970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.106 [2024-07-13 13:48:27.830306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.106 [2024-07-13 13:48:27.830739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.106 [2024-07-13 13:48:27.830779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.106 [2024-07-13 13:48:27.842823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.106 [2024-07-13 13:48:27.843257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.106 [2024-07-13 13:48:27.843299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.364 [2024-07-13 13:48:27.855703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.364 [2024-07-13 13:48:27.856141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.364 [2024-07-13 13:48:27.856185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.364 [2024-07-13 13:48:27.869259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.364 [2024-07-13 13:48:27.869774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.364 [2024-07-13 13:48:27.869820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.364 [2024-07-13 13:48:27.882531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.364 [2024-07-13 13:48:27.883063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.364 [2024-07-13 13:48:27.883129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.364 [2024-07-13 13:48:27.896307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.364 [2024-07-13 13:48:27.896766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.364 [2024-07-13 13:48:27.896828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.364 [2024-07-13 13:48:27.908496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.364 [2024-07-13 13:48:27.908938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.364 [2024-07-13 13:48:27.908980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.364 [2024-07-13 13:48:27.921218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:27.921655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:27.921694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:27.935295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:27.935786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:27.935843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:27.948221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:27.948673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:27.948719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:27.961520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:27.961971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:27.962012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:27.975006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:27.975241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:27.975282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:27.988886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:27.989189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:27.989230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:28.003022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:28.003477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:28.003525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:28.017529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:28.017978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:28.018022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:28.031129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:28.031562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:28.031621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:28.043702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:28.044177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:28.044232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:28.056613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:28.057049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:28.057089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:28.069272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:28.069736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:28.069777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:28.082957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:28.083423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:28.083464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:28.095206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:28.095632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:28.095687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.365 [2024-07-13 13:48:28.108792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.365 [2024-07-13 13:48:28.109229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.365 [2024-07-13 13:48:28.109279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.625 [2024-07-13 13:48:28.121397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.625 [2024-07-13 13:48:28.121855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.625 [2024-07-13 13:48:28.121923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.625 [2024-07-13 13:48:28.134741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.625 [2024-07-13 13:48:28.135188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.625 [2024-07-13 13:48:28.135245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.625 [2024-07-13 13:48:28.149039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.625 [2024-07-13 13:48:28.149497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.625 [2024-07-13 13:48:28.149553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.625 [2024-07-13 13:48:28.161934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.625 [2024-07-13 13:48:28.162428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.625 [2024-07-13 13:48:28.162474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.625 [2024-07-13 13:48:28.175140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.625 [2024-07-13 13:48:28.175667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.625 [2024-07-13 13:48:28.175712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.625 [2024-07-13 13:48:28.188490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.625 [2024-07-13 13:48:28.188668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.625 [2024-07-13 13:48:28.188708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.625 [2024-07-13 13:48:28.202478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:36:53.625 [2024-07-13 13:48:28.202986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.625 [2024-07-13 13:48:28.203027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.625 [2024-07-13 13:48:28.216441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.625 [2024-07-13 13:48:28.216919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.625 [2024-07-13 13:48:28.216962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.625 [2024-07-13 13:48:28.229333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.625 [2024-07-13 13:48:28.229754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.625 [2024-07-13 13:48:28.229796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.625 [2024-07-13 13:48:28.242061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.625 [2024-07-13 13:48:28.242466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.625 [2024-07-13 13:48:28.242505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.626 [2024-07-13 13:48:28.254784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.626 [2024-07-13 13:48:28.255219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.626 [2024-07-13 13:48:28.255259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.626 [2024-07-13 13:48:28.267425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.626 [2024-07-13 13:48:28.267829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.626 [2024-07-13 13:48:28.267894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.626 [2024-07-13 13:48:28.279717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.626 [2024-07-13 13:48:28.280169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.626 [2024-07-13 13:48:28.280217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.626 [2024-07-13 13:48:28.293518] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.626 [2024-07-13 13:48:28.294016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.626 [2024-07-13 13:48:28.294057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.626 [2024-07-13 13:48:28.306811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.626 [2024-07-13 13:48:28.307325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.626 [2024-07-13 13:48:28.307364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.626 [2024-07-13 13:48:28.320452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.626 [2024-07-13 13:48:28.320963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.626 [2024-07-13 13:48:28.321003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.626 [2024-07-13 13:48:28.332528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.626 [2024-07-13 13:48:28.332961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.626 [2024-07-13 13:48:28.333013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.626 [2024-07-13 13:48:28.345244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.626 [2024-07-13 13:48:28.345757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.626 [2024-07-13 13:48:28.345804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.626 [2024-07-13 13:48:28.357113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.626 [2024-07-13 13:48:28.357573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.626 [2024-07-13 13:48:28.357614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.626 [2024-07-13 13:48:28.369873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.370299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.370342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.383089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.383517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.383558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.396523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.397008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.397048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.410437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.410924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.410968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.424025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.424480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.424520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.437839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.438305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.438351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.452839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.453268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.453308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.466664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.466847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.466894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.481069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.481490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.481543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.494531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.494985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.495026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.508864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.509352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.509397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.522327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.522742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.522796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.536328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.536709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.536748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.550484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.550718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.550758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.563593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.564047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.564088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.578384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.578804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.578857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.592378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.592781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.592818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.606275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.606681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.606726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.886 [2024-07-13 13:48:28.619813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:53.886 [2024-07-13 13:48:28.620277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:53.886 [2024-07-13 13:48:28.620314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.633294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.633737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.633778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.647063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.647501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.647561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.660675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.661133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.661173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.673808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.674298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.674344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.687712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.688171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.688227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.701579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.702087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.702123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.714297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.714441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.714477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.728258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.728644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.728682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.742030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.742436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.742474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.755310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.755709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.755748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.768807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.769281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.769327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.782683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.783221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.783266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.797079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.797480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.797519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.811461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.811989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.812041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.825664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.826088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.826128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.840369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.840782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.840821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.855487] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.855925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.855963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.868556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.869016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.869055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.147 [2024-07-13 13:48:28.883380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.147 [2024-07-13 13:48:28.883793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.147 [2024-07-13 13:48:28.883831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.406 [2024-07-13 13:48:28.896428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.407 [2024-07-13 13:48:28.896845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.407 [2024-07-13 13:48:28.896910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.407 [2024-07-13 13:48:28.910937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.407 [2024-07-13 13:48:28.911390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.407 [2024-07-13 13:48:28.911450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.407 [2024-07-13 13:48:28.924717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.407 [2024-07-13 13:48:28.925132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.407 [2024-07-13 13:48:28.925194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.407 [2024-07-13 13:48:28.940121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.407 [2024-07-13 13:48:28.940540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.407 [2024-07-13 13:48:28.940577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.407 [2024-07-13 13:48:28.954268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.407 [2024-07-13 13:48:28.954682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.407 [2024-07-13 13:48:28.954738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.407 [2024-07-13 13:48:28.967698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.407 [2024-07-13 13:48:28.968114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.407 [2024-07-13 13:48:28.968154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.407 [2024-07-13 13:48:28.981380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.407 [2024-07-13 13:48:28.981786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.407 [2024-07-13 13:48:28.981825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:54.407 [2024-07-13 13:48:28.995079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.407 [2024-07-13 13:48:28.995488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.407 [2024-07-13 13:48:28.995526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:54.407 [2024-07-13 13:48:29.008252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.407 [2024-07-13 13:48:29.008562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.407 [2024-07-13 13:48:29.008602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.407 [2024-07-13 13:48:29.022946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.407 [2024-07-13 13:48:29.023363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.407 [2024-07-13 13:48:29.023416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:54.407 [2024-07-13 13:48:29.037075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:54.407 [2024-07-13 13:48:29.037342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.407 [2024-07-13 13:48:29.037381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:54.407
00:36:54.407 Latency(us)
00:36:54.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:54.407 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:36:54.407 nvme0n1 : 2.01 2304.35 288.04 0.00 0.00 6922.92 4126.34 16117.00
00:36:54.407 ===================================================================================================================
00:36:54.407 Total : 2304.35 288.04 0.00 0.00 6922.92 4126.34 16117.00
00:36:54.407 0
00:36:54.407 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:54.407 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:54.407 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:54.407 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:54.407 | .driver_specific
00:36:54.407 | .nvme_error
00:36:54.407 | .status_code
00:36:54.407 | .command_transient_transport_error'
00:36:54.666 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 ))
00:36:54.666 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 452293
00:36:54.666 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 452293 ']'
00:36:54.666 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 452293
00:36:54.666 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:54.666 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:54.666 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 452293
00:36:54.666 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:54.666 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:54.666 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 452293'
00:36:54.666 killing process with pid 452293
00:36:54.666 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 452293
00:36:54.666 Received shutdown signal, test time was about 2.000000 seconds
00:36:54.666
00:36:54.666 Latency(us)
00:36:54.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:54.666 ===================================================================================================================
00:36:54.666 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:54.666 13:48:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 452293
00:36:56.040 13:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 450120
00:36:56.040 13:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 450120 ']'
00:36:56.040 13:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 450120
00:36:56.040 13:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:56.040 13:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:56.040 13:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 450120
00:36:56.040 13:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:36:56.040 13:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:36:56.040 13:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 450120'
00:36:56.040 killing process with pid 450120
00:36:56.040 13:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 450120
00:36:56.040 13:48:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 450120
00:36:57.421
00:36:57.421 real 0m23.566s
00:36:57.421 user 0m45.710s
00:36:57.421 sys 0m4.436s
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:57.421 ************************************
00:36:57.421 END TEST nvmf_digest_error
00:36:57.421 ************************************
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:36:57.421 rmmod nvme_tcp
00:36:57.421 rmmod nvme_fabrics
00:36:57.421 rmmod nvme_keyring
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 450120 ']'
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 450120
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 450120 ']'
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 450120
00:36:57.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (450120) - No such process
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 450120 is not found'
00:36:57.421 Process with pid 450120 is not found
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:36:57.421 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:57.422 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:57.422 13:48:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.422 13:48:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:57.422 13:48:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.322 13:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:59.322 00:36:59.322 real 0m52.715s 00:36:59.322 user 1m34.453s 00:36:59.322 sys 0m10.547s 00:36:59.322 13:48:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:59.322 13:48:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:59.322 ************************************ 00:36:59.322 END TEST nvmf_digest 00:36:59.322 ************************************ 00:36:59.322 13:48:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:59.322 13:48:33 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:36:59.322 13:48:33 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:36:59.322 13:48:33 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:36:59.322 13:48:33 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:59.322 13:48:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:59.322 13:48:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:59.322 13:48:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:59.322 ************************************ 00:36:59.322 START TEST nvmf_bdevperf 00:36:59.322 ************************************ 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:59.322 * Looking for test storage... 
00:36:59.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:59.322 13:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:36:59.322 13:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:01.222 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.222 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:01.223 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:01.223 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:01.223 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:01.223 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:01.486 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:01.486 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:01.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:01.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:37:01.486 00:37:01.486 --- 10.0.0.2 ping statistics --- 00:37:01.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.486 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:37:01.486 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:01.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:01.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:37:01.486 00:37:01.486 --- 10.0.0.1 ping statistics --- 00:37:01.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.486 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:37:01.486 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:01.486 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:37:01.486 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:01.486 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:01.486 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:01.486 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:01.486 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:01.486 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:01.486 13:48:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=454917 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 454917 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 454917 ']' 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:01.486 13:48:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:01.486 [2024-07-13 13:48:36.109637] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:01.486 [2024-07-13 13:48:36.109794] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:01.486 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.785 [2024-07-13 13:48:36.252590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:02.044 [2024-07-13 13:48:36.516661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:37:02.044 [2024-07-13 13:48:36.516734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:02.044 [2024-07-13 13:48:36.516779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:02.044 [2024-07-13 13:48:36.516800] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:02.044 [2024-07-13 13:48:36.516821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:02.044 [2024-07-13 13:48:36.516935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:02.044 [2024-07-13 13:48:36.516985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.044 [2024-07-13 13:48:36.516996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:02.302 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:02.302 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:37:02.302 13:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:02.302 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:02.302 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.302 13:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:02.302 13:48:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:02.302 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.302 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.302 [2024-07-13 13:48:37.025808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:02.302 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.302 13:48:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:02.302 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.302 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.560 Malloc0 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:02.560 [2024-07-13 13:48:37.132354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:02.560 { 00:37:02.560 "params": { 00:37:02.560 "name": "Nvme$subsystem", 00:37:02.560 "trtype": "$TEST_TRANSPORT", 00:37:02.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:02.560 "adrfam": "ipv4", 00:37:02.560 "trsvcid": "$NVMF_PORT", 00:37:02.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:02.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:02.560 "hdgst": ${hdgst:-false}, 00:37:02.560 "ddgst": ${ddgst:-false} 00:37:02.560 }, 00:37:02.560 "method": "bdev_nvme_attach_controller" 00:37:02.560 } 00:37:02.560 EOF 00:37:02.560 )") 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:37:02.560 13:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:02.560 "params": { 00:37:02.560 "name": "Nvme1", 00:37:02.560 "trtype": "tcp", 00:37:02.560 "traddr": "10.0.0.2", 00:37:02.560 "adrfam": "ipv4", 00:37:02.560 "trsvcid": "4420", 00:37:02.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:02.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:02.560 "hdgst": false, 00:37:02.560 "ddgst": false 00:37:02.560 }, 00:37:02.560 "method": "bdev_nvme_attach_controller" 00:37:02.560 }' 00:37:02.560 [2024-07-13 13:48:37.211221] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:02.560 [2024-07-13 13:48:37.211367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455073 ] 00:37:02.560 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.818 [2024-07-13 13:48:37.339259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.076 [2024-07-13 13:48:37.572507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:03.334 Running I/O for 1 seconds... 
00:37:04.707 00:37:04.707 Latency(us) 00:37:04.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:04.707 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:04.707 Verification LBA range: start 0x0 length 0x4000 00:37:04.707 Nvme1n1 : 1.02 6084.13 23.77 0.00 0.00 20945.87 4199.16 17282.09 00:37:04.707 =================================================================================================================== 00:37:04.707 Total : 6084.13 23.77 0.00 0.00 20945.87 4199.16 17282.09 00:37:05.272 13:48:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=455393 00:37:05.272 13:48:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:05.272 13:48:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:05.272 13:48:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:05.272 13:48:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:37:05.272 13:48:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:37:05.272 13:48:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:05.272 13:48:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:05.272 { 00:37:05.272 "params": { 00:37:05.272 "name": "Nvme$subsystem", 00:37:05.272 "trtype": "$TEST_TRANSPORT", 00:37:05.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.272 "adrfam": "ipv4", 00:37:05.272 "trsvcid": "$NVMF_PORT", 00:37:05.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.272 "hdgst": ${hdgst:-false}, 00:37:05.272 "ddgst": ${ddgst:-false} 00:37:05.272 }, 00:37:05.272 "method": "bdev_nvme_attach_controller" 00:37:05.272 } 00:37:05.272 EOF 00:37:05.272 )") 00:37:05.272 13:48:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:37:05.272 13:48:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:37:05.272 13:48:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:37:05.272 13:48:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:05.272 "params": { 00:37:05.272 "name": "Nvme1", 00:37:05.272 "trtype": "tcp", 00:37:05.272 "traddr": "10.0.0.2", 00:37:05.272 "adrfam": "ipv4", 00:37:05.272 "trsvcid": "4420", 00:37:05.272 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:05.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:05.272 "hdgst": false, 00:37:05.272 "ddgst": false 00:37:05.272 }, 00:37:05.272 "method": "bdev_nvme_attach_controller" 00:37:05.272 }' 00:37:05.530 [2024-07-13 13:48:40.064202] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:05.530 [2024-07-13 13:48:40.064370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455393 ] 00:37:05.530 EAL: No free 2048 kB hugepages reported on node 1 00:37:05.530 [2024-07-13 13:48:40.188770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.788 [2024-07-13 13:48:40.425049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:06.354 Running I/O for 15 seconds... 
00:37:08.254 13:48:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 454917 00:37:08.254 13:48:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:08.514 [2024-07-13 13:48:43.003398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.003470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.003530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.003558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.003587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.003613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.003642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.003667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.003694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.003729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.003761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.003788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.003817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.003843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.003879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.003924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.003950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.003972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.003995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004016] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.514 [2024-07-13 13:48:43.004744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.514 [2024-07-13 13:48:43.004770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.004794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.004821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.004844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.004882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.004933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.004960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.004982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005072] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.005190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.005241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.005968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.005992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.006014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.006058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.006103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:37:08.515 [2024-07-13 13:48:43.006126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.006147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.006202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.006263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.006312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.006361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.515 [2024-07-13 13:48:43.006416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.006466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.006514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.006563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.006611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 
13:48:43.006636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.006660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.006709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.006758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.006808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.006857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.006933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.006957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.006978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.007000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.007025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.007048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.007070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.007091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.007112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.515 [2024-07-13 13:48:43.007134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.515 [2024-07-13 13:48:43.007170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.007210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007665] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.007957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.007979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.008000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.008043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.008085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.008129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.008196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.008245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.008295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.008354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.008403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:08.516 [2024-07-13 13:48:43.008453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.008502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.008552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.008601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.008649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.008698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.008747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.008796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.008846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.008920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.008968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.008991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.009012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.009034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.009054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.009077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.009097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.009119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.009139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.009180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 
[2024-07-13 13:48:43.009204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.009231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.009255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.009280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.516 [2024-07-13 13:48:43.009304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.516 [2024-07-13 13:48:43.009330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.009354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.009381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.009405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.009431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.009455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.009480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.009504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.009530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.009553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.009584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.009609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.009635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.009659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.009684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.009707] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.009733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.009758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.009783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.009807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.009832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.009856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.009892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.009931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.009953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.009973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.009995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.517 [2024-07-13 13:48:43.010016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.010036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set 00:37:08.517 [2024-07-13 13:48:43.010061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:08.517 [2024-07-13 13:48:43.010079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:08.517 [2024-07-13 13:48:43.010098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102632 len:8 PRP1 0x0 PRP2 0x0 00:37:08.517 [2024-07-13 13:48:43.010116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.010431] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller. 
00:37:08.517 [2024-07-13 13:48:43.010541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:08.517 [2024-07-13 13:48:43.010579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.010610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:08.517 [2024-07-13 13:48:43.010634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.010656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:08.517 [2024-07-13 13:48:43.010678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.010699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:08.517 [2024-07-13 13:48:43.010721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:08.517 [2024-07-13 13:48:43.010741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.517 [2024-07-13 13:48:43.014823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.517 [2024-07-13 13:48:43.014916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.517 [2024-07-13 13:48:43.015826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.517 [2024-07-13 13:48:43.015910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.517 [2024-07-13 13:48:43.015939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.517 [2024-07-13 13:48:43.016238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.517 [2024-07-13 13:48:43.016531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.517 [2024-07-13 13:48:43.016564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.517 [2024-07-13 13:48:43.016589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.517 [2024-07-13 13:48:43.020760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.517 [2024-07-13 13:48:43.029629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.517 [2024-07-13 13:48:43.030142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.517 [2024-07-13 13:48:43.030185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.517 [2024-07-13 13:48:43.030212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.517 [2024-07-13 13:48:43.030501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.517 [2024-07-13 13:48:43.030791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.517 [2024-07-13 13:48:43.030823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.517 [2024-07-13 13:48:43.030845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.517 [2024-07-13 13:48:43.035023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.517 [2024-07-13 13:48:43.044121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.517 [2024-07-13 13:48:43.044658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.517 [2024-07-13 13:48:43.044700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.517 [2024-07-13 13:48:43.044732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.517 [2024-07-13 13:48:43.045033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.517 [2024-07-13 13:48:43.045324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.517 [2024-07-13 13:48:43.045357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.517 [2024-07-13 13:48:43.045379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.517 [2024-07-13 13:48:43.049570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.517 [2024-07-13 13:48:43.058694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.517 [2024-07-13 13:48:43.059195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.517 [2024-07-13 13:48:43.059246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.517 [2024-07-13 13:48:43.059273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.517 [2024-07-13 13:48:43.059561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.517 [2024-07-13 13:48:43.059851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.517 [2024-07-13 13:48:43.059893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.517 [2024-07-13 13:48:43.059917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.517 [2024-07-13 13:48:43.064093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.517 [2024-07-13 13:48:43.073237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.517 [2024-07-13 13:48:43.073768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.517 [2024-07-13 13:48:43.073817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.517 [2024-07-13 13:48:43.073843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.517 [2024-07-13 13:48:43.074153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.517 [2024-07-13 13:48:43.074444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.517 [2024-07-13 13:48:43.074477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.517 [2024-07-13 13:48:43.074499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.517 [2024-07-13 13:48:43.078654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.517 [2024-07-13 13:48:43.087743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.517 [2024-07-13 13:48:43.088306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.517 [2024-07-13 13:48:43.088356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.517 [2024-07-13 13:48:43.088383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.518 [2024-07-13 13:48:43.088669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.518 [2024-07-13 13:48:43.088975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.518 [2024-07-13 13:48:43.089008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.518 [2024-07-13 13:48:43.089030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.518 [2024-07-13 13:48:43.093179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.518 [2024-07-13 13:48:43.102232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.518 [2024-07-13 13:48:43.102759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.518 [2024-07-13 13:48:43.102808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.518 [2024-07-13 13:48:43.102834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.518 [2024-07-13 13:48:43.103130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.518 [2024-07-13 13:48:43.103429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.518 [2024-07-13 13:48:43.103462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.518 [2024-07-13 13:48:43.103484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.518 [2024-07-13 13:48:43.107629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.518 [2024-07-13 13:48:43.116686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.518 [2024-07-13 13:48:43.117190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.518 [2024-07-13 13:48:43.117242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.518 [2024-07-13 13:48:43.117269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.518 [2024-07-13 13:48:43.117554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.518 [2024-07-13 13:48:43.117854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.518 [2024-07-13 13:48:43.117896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.518 [2024-07-13 13:48:43.117919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.518 [2024-07-13 13:48:43.122069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.518 [2024-07-13 13:48:43.131140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.518 [2024-07-13 13:48:43.131666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.518 [2024-07-13 13:48:43.131707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.518 [2024-07-13 13:48:43.131733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.518 [2024-07-13 13:48:43.132033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.518 [2024-07-13 13:48:43.132320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.518 [2024-07-13 13:48:43.132352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.518 [2024-07-13 13:48:43.132374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.518 [2024-07-13 13:48:43.136550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.518 [2024-07-13 13:48:43.145621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.518 [2024-07-13 13:48:43.146124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.518 [2024-07-13 13:48:43.146174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.518 [2024-07-13 13:48:43.146200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.518 [2024-07-13 13:48:43.146484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.518 [2024-07-13 13:48:43.146772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.518 [2024-07-13 13:48:43.146804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.518 [2024-07-13 13:48:43.146826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.518 [2024-07-13 13:48:43.150962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.518 [2024-07-13 13:48:43.160294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.518 [2024-07-13 13:48:43.160825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.518 [2024-07-13 13:48:43.160879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.518 [2024-07-13 13:48:43.160915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.518 [2024-07-13 13:48:43.161200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.518 [2024-07-13 13:48:43.161497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.518 [2024-07-13 13:48:43.161529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.518 [2024-07-13 13:48:43.161552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.518 [2024-07-13 13:48:43.165707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.518 [2024-07-13 13:48:43.174739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.518 [2024-07-13 13:48:43.175264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.518 [2024-07-13 13:48:43.175314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.518 [2024-07-13 13:48:43.175340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.518 [2024-07-13 13:48:43.175626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.518 [2024-07-13 13:48:43.175929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.518 [2024-07-13 13:48:43.175972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.518 [2024-07-13 13:48:43.175994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.518 [2024-07-13 13:48:43.180106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.518 [2024-07-13 13:48:43.189362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.518 [2024-07-13 13:48:43.189851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.518 [2024-07-13 13:48:43.189913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.518 [2024-07-13 13:48:43.189941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.518 [2024-07-13 13:48:43.190227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.518 [2024-07-13 13:48:43.190515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.518 [2024-07-13 13:48:43.190547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.518 [2024-07-13 13:48:43.190569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.518 [2024-07-13 13:48:43.194699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.518 [2024-07-13 13:48:43.203950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.518 [2024-07-13 13:48:43.204446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.519 [2024-07-13 13:48:43.204495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.519 [2024-07-13 13:48:43.204521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.519 [2024-07-13 13:48:43.204808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.519 [2024-07-13 13:48:43.205108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.519 [2024-07-13 13:48:43.205140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.519 [2024-07-13 13:48:43.205163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.519 [2024-07-13 13:48:43.209276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.519 [2024-07-13 13:48:43.218501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.519 [2024-07-13 13:48:43.219029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.519 [2024-07-13 13:48:43.219078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.519 [2024-07-13 13:48:43.219104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.519 [2024-07-13 13:48:43.219390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.519 [2024-07-13 13:48:43.219679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.519 [2024-07-13 13:48:43.219710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.519 [2024-07-13 13:48:43.219732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.519 [2024-07-13 13:48:43.223856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.519 [2024-07-13 13:48:43.233090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.519 [2024-07-13 13:48:43.233596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.519 [2024-07-13 13:48:43.233646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.519 [2024-07-13 13:48:43.233686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.519 [2024-07-13 13:48:43.233987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.519 [2024-07-13 13:48:43.234289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.519 [2024-07-13 13:48:43.234322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.519 [2024-07-13 13:48:43.234345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.519 [2024-07-13 13:48:43.238468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.519 [2024-07-13 13:48:43.247702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.519 [2024-07-13 13:48:43.248236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.519 [2024-07-13 13:48:43.248287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.519 [2024-07-13 13:48:43.248313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.519 [2024-07-13 13:48:43.248597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.519 [2024-07-13 13:48:43.248897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.519 [2024-07-13 13:48:43.248930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.519 [2024-07-13 13:48:43.248952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.519 [2024-07-13 13:48:43.253076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.777 [2024-07-13 13:48:43.262364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.777 [2024-07-13 13:48:43.262891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.777 [2024-07-13 13:48:43.262942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.777 [2024-07-13 13:48:43.262968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.777 [2024-07-13 13:48:43.263254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.777 [2024-07-13 13:48:43.263543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.777 [2024-07-13 13:48:43.263575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.777 [2024-07-13 13:48:43.263597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.777 [2024-07-13 13:48:43.267826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.777 [2024-07-13 13:48:43.276856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.777 [2024-07-13 13:48:43.277366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.777 [2024-07-13 13:48:43.277415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.777 [2024-07-13 13:48:43.277442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.777 [2024-07-13 13:48:43.277728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.777 [2024-07-13 13:48:43.278028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.777 [2024-07-13 13:48:43.278060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.777 [2024-07-13 13:48:43.278081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.777 [2024-07-13 13:48:43.282221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.777 [2024-07-13 13:48:43.291459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.777 [2024-07-13 13:48:43.291977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.777 [2024-07-13 13:48:43.292025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.777 [2024-07-13 13:48:43.292051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.777 [2024-07-13 13:48:43.292336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.777 [2024-07-13 13:48:43.292624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.777 [2024-07-13 13:48:43.292656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.777 [2024-07-13 13:48:43.292679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.777 [2024-07-13 13:48:43.296801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.777 [2024-07-13 13:48:43.306045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.777 [2024-07-13 13:48:43.306664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.777 [2024-07-13 13:48:43.306731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.777 [2024-07-13 13:48:43.306757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.777 [2024-07-13 13:48:43.307054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.777 [2024-07-13 13:48:43.307343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.777 [2024-07-13 13:48:43.307374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.777 [2024-07-13 13:48:43.307398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.777 [2024-07-13 13:48:43.311514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.777 [2024-07-13 13:48:43.320502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.777 [2024-07-13 13:48:43.321073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.777 [2024-07-13 13:48:43.321121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.777 [2024-07-13 13:48:43.321148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.777 [2024-07-13 13:48:43.321432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.777 [2024-07-13 13:48:43.321720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.777 [2024-07-13 13:48:43.321751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.777 [2024-07-13 13:48:43.321774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.777 [2024-07-13 13:48:43.325900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.777 [2024-07-13 13:48:43.335130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.777 [2024-07-13 13:48:43.335698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.777 [2024-07-13 13:48:43.335752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.777 [2024-07-13 13:48:43.335779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.777 [2024-07-13 13:48:43.336073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.777 [2024-07-13 13:48:43.336361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.777 [2024-07-13 13:48:43.336394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.777 [2024-07-13 13:48:43.336416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.777 [2024-07-13 13:48:43.340534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.777 [2024-07-13 13:48:43.349768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.777 [2024-07-13 13:48:43.350281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.777 [2024-07-13 13:48:43.350330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.777 [2024-07-13 13:48:43.350357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.777 [2024-07-13 13:48:43.350641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.777 [2024-07-13 13:48:43.350944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.777 [2024-07-13 13:48:43.350977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.777 [2024-07-13 13:48:43.350998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.777 [2024-07-13 13:48:43.355131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.777 [2024-07-13 13:48:43.364376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.777 [2024-07-13 13:48:43.364894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.777 [2024-07-13 13:48:43.364936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.777 [2024-07-13 13:48:43.364962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.777 [2024-07-13 13:48:43.365246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.777 [2024-07-13 13:48:43.365534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.777 [2024-07-13 13:48:43.365566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.777 [2024-07-13 13:48:43.365589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.777 [2024-07-13 13:48:43.369714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.777 [2024-07-13 13:48:43.378973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.777 [2024-07-13 13:48:43.379436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.777 [2024-07-13 13:48:43.379483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.777 [2024-07-13 13:48:43.379508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.777 [2024-07-13 13:48:43.379794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.777 [2024-07-13 13:48:43.380095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.777 [2024-07-13 13:48:43.380127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.778 [2024-07-13 13:48:43.380161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.778 [2024-07-13 13:48:43.384292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.778 [2024-07-13 13:48:43.393547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.778 [2024-07-13 13:48:43.394018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.778 [2024-07-13 13:48:43.394069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.778 [2024-07-13 13:48:43.394096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.778 [2024-07-13 13:48:43.394381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.778 [2024-07-13 13:48:43.394670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.778 [2024-07-13 13:48:43.394702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.778 [2024-07-13 13:48:43.394724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.778 [2024-07-13 13:48:43.398863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.778 [2024-07-13 13:48:43.408184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.778 [2024-07-13 13:48:43.408676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.778 [2024-07-13 13:48:43.408725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.778 [2024-07-13 13:48:43.408751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.778 [2024-07-13 13:48:43.409049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.778 [2024-07-13 13:48:43.409338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.778 [2024-07-13 13:48:43.409369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.778 [2024-07-13 13:48:43.409391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.778 [2024-07-13 13:48:43.413534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.778 [2024-07-13 13:48:43.422832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.778 [2024-07-13 13:48:43.423361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.778 [2024-07-13 13:48:43.423412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.778 [2024-07-13 13:48:43.423438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.778 [2024-07-13 13:48:43.423723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.778 [2024-07-13 13:48:43.424024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.778 [2024-07-13 13:48:43.424057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.778 [2024-07-13 13:48:43.424087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.778 [2024-07-13 13:48:43.428237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.778 [2024-07-13 13:48:43.437303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.778 [2024-07-13 13:48:43.437836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.778 [2024-07-13 13:48:43.437896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.778 [2024-07-13 13:48:43.437923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.778 [2024-07-13 13:48:43.438211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.778 [2024-07-13 13:48:43.438514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.778 [2024-07-13 13:48:43.438546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.778 [2024-07-13 13:48:43.438570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.778 [2024-07-13 13:48:43.442707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.778 [2024-07-13 13:48:43.451781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.778 [2024-07-13 13:48:43.452317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.778 [2024-07-13 13:48:43.452365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.778 [2024-07-13 13:48:43.452392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.778 [2024-07-13 13:48:43.452679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.778 [2024-07-13 13:48:43.452980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.778 [2024-07-13 13:48:43.453013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.778 [2024-07-13 13:48:43.453037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.778 [2024-07-13 13:48:43.457188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.778 [2024-07-13 13:48:43.466231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.778 [2024-07-13 13:48:43.466809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.778 [2024-07-13 13:48:43.466882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.778 [2024-07-13 13:48:43.466912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.778 [2024-07-13 13:48:43.467200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.778 [2024-07-13 13:48:43.467487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.778 [2024-07-13 13:48:43.467518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.778 [2024-07-13 13:48:43.467541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.778 [2024-07-13 13:48:43.471691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.778 [2024-07-13 13:48:43.480750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.778 [2024-07-13 13:48:43.481292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.778 [2024-07-13 13:48:43.481347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.778 [2024-07-13 13:48:43.481374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.778 [2024-07-13 13:48:43.481660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.778 [2024-07-13 13:48:43.481965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.778 [2024-07-13 13:48:43.481997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.778 [2024-07-13 13:48:43.482025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.778 [2024-07-13 13:48:43.486223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:08.778 [2024-07-13 13:48:43.495307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.778 [2024-07-13 13:48:43.495839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.778 [2024-07-13 13:48:43.495901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.778 [2024-07-13 13:48:43.495937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.778 [2024-07-13 13:48:43.496225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.778 [2024-07-13 13:48:43.496515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.778 [2024-07-13 13:48:43.496546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.778 [2024-07-13 13:48:43.496569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.778 [2024-07-13 13:48:43.500725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:08.778 [2024-07-13 13:48:43.509808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:08.778 [2024-07-13 13:48:43.510334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:08.778 [2024-07-13 13:48:43.510385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:08.778 [2024-07-13 13:48:43.510412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:08.778 [2024-07-13 13:48:43.510699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:08.778 [2024-07-13 13:48:43.511002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:08.778 [2024-07-13 13:48:43.511034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:08.778 [2024-07-13 13:48:43.511057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:08.778 [2024-07-13 13:48:43.515184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.037 [2024-07-13 13:48:43.524509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.037 [2024-07-13 13:48:43.525070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.037 [2024-07-13 13:48:43.525123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.037 [2024-07-13 13:48:43.525150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.037 [2024-07-13 13:48:43.525504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.037 [2024-07-13 13:48:43.525806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.037 [2024-07-13 13:48:43.525838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.037 [2024-07-13 13:48:43.525877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.037 [2024-07-13 13:48:43.530064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.037 [2024-07-13 13:48:43.539107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.037 [2024-07-13 13:48:43.539614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.037 [2024-07-13 13:48:43.539665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.037 [2024-07-13 13:48:43.539691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.037 [2024-07-13 13:48:43.539990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.037 [2024-07-13 13:48:43.540281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.037 [2024-07-13 13:48:43.540313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.037 [2024-07-13 13:48:43.540336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.037 [2024-07-13 13:48:43.544481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.037 [2024-07-13 13:48:43.553765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.037 [2024-07-13 13:48:43.554296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.037 [2024-07-13 13:48:43.554345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.037 [2024-07-13 13:48:43.554371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.037 [2024-07-13 13:48:43.554656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.037 [2024-07-13 13:48:43.554962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.037 [2024-07-13 13:48:43.555005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.037 [2024-07-13 13:48:43.555027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.037 [2024-07-13 13:48:43.559172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.037 [2024-07-13 13:48:43.568462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.037 [2024-07-13 13:48:43.568983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.037 [2024-07-13 13:48:43.569032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.037 [2024-07-13 13:48:43.569058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.037 [2024-07-13 13:48:43.569346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.037 [2024-07-13 13:48:43.569636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.037 [2024-07-13 13:48:43.569668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.037 [2024-07-13 13:48:43.569698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.037 [2024-07-13 13:48:43.573845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.037 [2024-07-13 13:48:43.582907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.037 [2024-07-13 13:48:43.583430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.037 [2024-07-13 13:48:43.583478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.037 [2024-07-13 13:48:43.583504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.037 [2024-07-13 13:48:43.583789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.037 [2024-07-13 13:48:43.584090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.037 [2024-07-13 13:48:43.584122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.037 [2024-07-13 13:48:43.584151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.037 [2024-07-13 13:48:43.588296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.037 [2024-07-13 13:48:43.597571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.037 [2024-07-13 13:48:43.598115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.037 [2024-07-13 13:48:43.598164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.037 [2024-07-13 13:48:43.598190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.037 [2024-07-13 13:48:43.598475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.037 [2024-07-13 13:48:43.598763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.037 [2024-07-13 13:48:43.598795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.037 [2024-07-13 13:48:43.598817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.037 [2024-07-13 13:48:43.602960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.037 [2024-07-13 13:48:43.612236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.037 [2024-07-13 13:48:43.612724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.037 [2024-07-13 13:48:43.612773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.037 [2024-07-13 13:48:43.612800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.037 [2024-07-13 13:48:43.613097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.037 [2024-07-13 13:48:43.613386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.037 [2024-07-13 13:48:43.613418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.037 [2024-07-13 13:48:43.613440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.037 [2024-07-13 13:48:43.617576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.037 [2024-07-13 13:48:43.626855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.037 [2024-07-13 13:48:43.627396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.037 [2024-07-13 13:48:43.627445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.037 [2024-07-13 13:48:43.627472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.038 [2024-07-13 13:48:43.627758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.038 [2024-07-13 13:48:43.628059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.038 [2024-07-13 13:48:43.628091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.038 [2024-07-13 13:48:43.628119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.038 [2024-07-13 13:48:43.632248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.038 [2024-07-13 13:48:43.641512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.038 [2024-07-13 13:48:43.642026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.038 [2024-07-13 13:48:43.642074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.038 [2024-07-13 13:48:43.642100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.038 [2024-07-13 13:48:43.642388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.038 [2024-07-13 13:48:43.642679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.038 [2024-07-13 13:48:43.642710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.038 [2024-07-13 13:48:43.642747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.038 [2024-07-13 13:48:43.646898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.038 [2024-07-13 13:48:43.656139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.038 [2024-07-13 13:48:43.656647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.038 [2024-07-13 13:48:43.656694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.038 [2024-07-13 13:48:43.656719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.038 [2024-07-13 13:48:43.657015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.038 [2024-07-13 13:48:43.657302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.038 [2024-07-13 13:48:43.657334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.038 [2024-07-13 13:48:43.657356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.038 [2024-07-13 13:48:43.661494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.038 [2024-07-13 13:48:43.670725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.038 [2024-07-13 13:48:43.671257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.038 [2024-07-13 13:48:43.671308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.038 [2024-07-13 13:48:43.671333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.038 [2024-07-13 13:48:43.671622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.038 [2024-07-13 13:48:43.671924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.038 [2024-07-13 13:48:43.671956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.038 [2024-07-13 13:48:43.671979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.038 [2024-07-13 13:48:43.676190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.038 [2024-07-13 13:48:43.685193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.038 [2024-07-13 13:48:43.685691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.038 [2024-07-13 13:48:43.685741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.038 [2024-07-13 13:48:43.685767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.038 [2024-07-13 13:48:43.686064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.038 [2024-07-13 13:48:43.686353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.038 [2024-07-13 13:48:43.686385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.038 [2024-07-13 13:48:43.686407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.038 [2024-07-13 13:48:43.690533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.038 [2024-07-13 13:48:43.699792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.038 [2024-07-13 13:48:43.700325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.038 [2024-07-13 13:48:43.700377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.038 [2024-07-13 13:48:43.700403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.038 [2024-07-13 13:48:43.700689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.038 [2024-07-13 13:48:43.700991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.038 [2024-07-13 13:48:43.701022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.038 [2024-07-13 13:48:43.701047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.038 [2024-07-13 13:48:43.705171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.038 [2024-07-13 13:48:43.714421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.038 [2024-07-13 13:48:43.714932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.038 [2024-07-13 13:48:43.714984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.038 [2024-07-13 13:48:43.715010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.038 [2024-07-13 13:48:43.715296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.038 [2024-07-13 13:48:43.715585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.038 [2024-07-13 13:48:43.715617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.038 [2024-07-13 13:48:43.715645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.038 [2024-07-13 13:48:43.719763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.038 [2024-07-13 13:48:43.729014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.038 [2024-07-13 13:48:43.729511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.038 [2024-07-13 13:48:43.729559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.038 [2024-07-13 13:48:43.729585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.038 [2024-07-13 13:48:43.729879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.038 [2024-07-13 13:48:43.730175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.038 [2024-07-13 13:48:43.730213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.038 [2024-07-13 13:48:43.730235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.038 [2024-07-13 13:48:43.734352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.038 [2024-07-13 13:48:43.743575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.038 [2024-07-13 13:48:43.744107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.038 [2024-07-13 13:48:43.744158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.038 [2024-07-13 13:48:43.744184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.039 [2024-07-13 13:48:43.744469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.039 [2024-07-13 13:48:43.744757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.039 [2024-07-13 13:48:43.744789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.039 [2024-07-13 13:48:43.744811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.039 [2024-07-13 13:48:43.748930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.039 [2024-07-13 13:48:43.758156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.039 [2024-07-13 13:48:43.758649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.039 [2024-07-13 13:48:43.758697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.039 [2024-07-13 13:48:43.758722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.039 [2024-07-13 13:48:43.759019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.039 [2024-07-13 13:48:43.759306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.039 [2024-07-13 13:48:43.759339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.039 [2024-07-13 13:48:43.759361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.039 [2024-07-13 13:48:43.763471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.039 [2024-07-13 13:48:43.772712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.039 [2024-07-13 13:48:43.773223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.039 [2024-07-13 13:48:43.773270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.039 [2024-07-13 13:48:43.773296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.039 [2024-07-13 13:48:43.773591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.039 [2024-07-13 13:48:43.773891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.039 [2024-07-13 13:48:43.773924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.039 [2024-07-13 13:48:43.773956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.039 [2024-07-13 13:48:43.778146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.298 [2024-07-13 13:48:43.787461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.298 [2024-07-13 13:48:43.787950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.298 [2024-07-13 13:48:43.788003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.298 [2024-07-13 13:48:43.788030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.298 [2024-07-13 13:48:43.788315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.298 [2024-07-13 13:48:43.788602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.298 [2024-07-13 13:48:43.788633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.298 [2024-07-13 13:48:43.788656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.298 [2024-07-13 13:48:43.792774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.298 [2024-07-13 13:48:43.802017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.298 [2024-07-13 13:48:43.802528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.298 [2024-07-13 13:48:43.802576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.298 [2024-07-13 13:48:43.802602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.298 [2024-07-13 13:48:43.802899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.298 [2024-07-13 13:48:43.803193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.298 [2024-07-13 13:48:43.803226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.298 [2024-07-13 13:48:43.803248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.298 [2024-07-13 13:48:43.807362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.298 [2024-07-13 13:48:43.816607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.298 [2024-07-13 13:48:43.817124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.298 [2024-07-13 13:48:43.817176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.298 [2024-07-13 13:48:43.817202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.298 [2024-07-13 13:48:43.817493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.298 [2024-07-13 13:48:43.817781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.298 [2024-07-13 13:48:43.817814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.298 [2024-07-13 13:48:43.817836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.298 [2024-07-13 13:48:43.821963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.298 [2024-07-13 13:48:43.831190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.298 [2024-07-13 13:48:43.831716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.298 [2024-07-13 13:48:43.831762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.298 [2024-07-13 13:48:43.831788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.298 [2024-07-13 13:48:43.832084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.298 [2024-07-13 13:48:43.832371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.298 [2024-07-13 13:48:43.832403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.298 [2024-07-13 13:48:43.832426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.298 [2024-07-13 13:48:43.836542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.298 [2024-07-13 13:48:43.845761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.298 [2024-07-13 13:48:43.846276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.298 [2024-07-13 13:48:43.846326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.298 [2024-07-13 13:48:43.846351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.298 [2024-07-13 13:48:43.846636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.298 [2024-07-13 13:48:43.846935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.298 [2024-07-13 13:48:43.846967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.298 [2024-07-13 13:48:43.846989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.298 [2024-07-13 13:48:43.851105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.298 [2024-07-13 13:48:43.860342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.298 [2024-07-13 13:48:43.860874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.298 [2024-07-13 13:48:43.860921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.298 [2024-07-13 13:48:43.860948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.298 [2024-07-13 13:48:43.861233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.298 [2024-07-13 13:48:43.861522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.298 [2024-07-13 13:48:43.861554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.298 [2024-07-13 13:48:43.861581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.298 [2024-07-13 13:48:43.865699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.299 [2024-07-13 13:48:43.874913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.299 [2024-07-13 13:48:43.875425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.299 [2024-07-13 13:48:43.875475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.299 [2024-07-13 13:48:43.875501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.299 [2024-07-13 13:48:43.875785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.299 [2024-07-13 13:48:43.876083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.299 [2024-07-13 13:48:43.876116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.299 [2024-07-13 13:48:43.876146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.299 [2024-07-13 13:48:43.880263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.299 [2024-07-13 13:48:43.889474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.299 [2024-07-13 13:48:43.889992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.299 [2024-07-13 13:48:43.890041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.299 [2024-07-13 13:48:43.890067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.299 [2024-07-13 13:48:43.890352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.299 [2024-07-13 13:48:43.890640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.299 [2024-07-13 13:48:43.890672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.299 [2024-07-13 13:48:43.890695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.299 [2024-07-13 13:48:43.894806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.299 [2024-07-13 13:48:43.904036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.299 [2024-07-13 13:48:43.904549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.299 [2024-07-13 13:48:43.904599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.299 [2024-07-13 13:48:43.904626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.299 [2024-07-13 13:48:43.904922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.299 [2024-07-13 13:48:43.905211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.299 [2024-07-13 13:48:43.905243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.299 [2024-07-13 13:48:43.905264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.299 [2024-07-13 13:48:43.909388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.299 [2024-07-13 13:48:43.918618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.299 [2024-07-13 13:48:43.919114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.299 [2024-07-13 13:48:43.919163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.299 [2024-07-13 13:48:43.919189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.299 [2024-07-13 13:48:43.919474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.299 [2024-07-13 13:48:43.919761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.299 [2024-07-13 13:48:43.919793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.299 [2024-07-13 13:48:43.919815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.299 [2024-07-13 13:48:43.923936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.299 [2024-07-13 13:48:43.933183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.299 [2024-07-13 13:48:43.933767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.299 [2024-07-13 13:48:43.933818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.299 [2024-07-13 13:48:43.933844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.299 [2024-07-13 13:48:43.934150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.299 [2024-07-13 13:48:43.934449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.299 [2024-07-13 13:48:43.934481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.299 [2024-07-13 13:48:43.934503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.299 [2024-07-13 13:48:43.938617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.299 [2024-07-13 13:48:43.947608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.299 [2024-07-13 13:48:43.948177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.299 [2024-07-13 13:48:43.948218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.299 [2024-07-13 13:48:43.948248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.299 [2024-07-13 13:48:43.948533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.299 [2024-07-13 13:48:43.948822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.299 [2024-07-13 13:48:43.948873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.299 [2024-07-13 13:48:43.948899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.299 [2024-07-13 13:48:43.953022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.299 [2024-07-13 13:48:43.962261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.299 [2024-07-13 13:48:43.962749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.299 [2024-07-13 13:48:43.962796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.299 [2024-07-13 13:48:43.962822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.299 [2024-07-13 13:48:43.963125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.299 [2024-07-13 13:48:43.963413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.299 [2024-07-13 13:48:43.963444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.299 [2024-07-13 13:48:43.963467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.299 [2024-07-13 13:48:43.967583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.299 [2024-07-13 13:48:43.976810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.299 [2024-07-13 13:48:43.977331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.299 [2024-07-13 13:48:43.977380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.299 [2024-07-13 13:48:43.977407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.299 [2024-07-13 13:48:43.977692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.299 [2024-07-13 13:48:43.977992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.299 [2024-07-13 13:48:43.978025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.299 [2024-07-13 13:48:43.978054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.299 [2024-07-13 13:48:43.982167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.299 [2024-07-13 13:48:43.991380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.299 [2024-07-13 13:48:43.991880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.300 [2024-07-13 13:48:43.991928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.300 [2024-07-13 13:48:43.991953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.300 [2024-07-13 13:48:43.992237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.300 [2024-07-13 13:48:43.992525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.300 [2024-07-13 13:48:43.992557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.300 [2024-07-13 13:48:43.992579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.300 [2024-07-13 13:48:43.996713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.300 [2024-07-13 13:48:44.005956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.300 [2024-07-13 13:48:44.006463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.300 [2024-07-13 13:48:44.006505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.300 [2024-07-13 13:48:44.006532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.300 [2024-07-13 13:48:44.006818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.300 [2024-07-13 13:48:44.007118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.300 [2024-07-13 13:48:44.007152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.300 [2024-07-13 13:48:44.007181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.300 [2024-07-13 13:48:44.011302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.300 [2024-07-13 13:48:44.020771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.300 [2024-07-13 13:48:44.021302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.300 [2024-07-13 13:48:44.021345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.300 [2024-07-13 13:48:44.021372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.300 [2024-07-13 13:48:44.021659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.300 [2024-07-13 13:48:44.021964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.300 [2024-07-13 13:48:44.021998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.300 [2024-07-13 13:48:44.022021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.300 [2024-07-13 13:48:44.026143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.300 [2024-07-13 13:48:44.035398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.300 [2024-07-13 13:48:44.035873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.300 [2024-07-13 13:48:44.035916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.300 [2024-07-13 13:48:44.035944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.300 [2024-07-13 13:48:44.036230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.300 [2024-07-13 13:48:44.036519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.300 [2024-07-13 13:48:44.036552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.300 [2024-07-13 13:48:44.036575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.300 [2024-07-13 13:48:44.040855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.560 [2024-07-13 13:48:44.050085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.560 [2024-07-13 13:48:44.050570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.560 [2024-07-13 13:48:44.050613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.560 [2024-07-13 13:48:44.050639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.560 [2024-07-13 13:48:44.050941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.560 [2024-07-13 13:48:44.051229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.560 [2024-07-13 13:48:44.051262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.560 [2024-07-13 13:48:44.051285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.560 [2024-07-13 13:48:44.055410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.560 [2024-07-13 13:48:44.064652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.065176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.065253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.065280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.065567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.065857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.065902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.065926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.070058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.561 [2024-07-13 13:48:44.079295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.079775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.079817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.079843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.080139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.080437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.080469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.080492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.084612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.561 [2024-07-13 13:48:44.093839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.094356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.094398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.094424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.094710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.095019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.095052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.095075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.099194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.561 [2024-07-13 13:48:44.108423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.108892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.108933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.108959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.109252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.109541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.109573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.109595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.113722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.561 [2024-07-13 13:48:44.122965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.123471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.123513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.123539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.123824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.124132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.124166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.124188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.128300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.561 [2024-07-13 13:48:44.137531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.138020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.138062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.138088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.138373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.138662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.138696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.138719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.142840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.561 [2024-07-13 13:48:44.152078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.152574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.152615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.152642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.152942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.153229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.153268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.153292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.157433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.561 [2024-07-13 13:48:44.166686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.167172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.167214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.167240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.167524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.167811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.167842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.167873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.172007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.561 [2024-07-13 13:48:44.181227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.181714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.181756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.181781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.182079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.182367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.182400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.182422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.186541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.561 [2024-07-13 13:48:44.195763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.196266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.196307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.196333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.196617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.196915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.196946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.196968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.201082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.561 [2024-07-13 13:48:44.210336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.210826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.210885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.210913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.211197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.211484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.211517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.211539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.215657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.561 [2024-07-13 13:48:44.224932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.225401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.225443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.225470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.225755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.226055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.226087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.226110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.230241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.561 [2024-07-13 13:48:44.239479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.239987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.240028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.240055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.240340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.240629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.240661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.240683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.244789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.561 [2024-07-13 13:48:44.254033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.254512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.254554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.254586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.254880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.255169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.255201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.255223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.259345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.561 [2024-07-13 13:48:44.268595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.269075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.269116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.269142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.269445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.269732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.269764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.269786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.273912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.561 [2024-07-13 13:48:44.283157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.283654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.283695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.283720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.284019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.284307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.284338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.284361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.288492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.561 [2024-07-13 13:48:44.297761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.561 [2024-07-13 13:48:44.298257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.561 [2024-07-13 13:48:44.298300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.561 [2024-07-13 13:48:44.298327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.561 [2024-07-13 13:48:44.298611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.561 [2024-07-13 13:48:44.298911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.561 [2024-07-13 13:48:44.298948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.561 [2024-07-13 13:48:44.298972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.561 [2024-07-13 13:48:44.303274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.821 [2024-07-13 13:48:44.312546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.821 [2024-07-13 13:48:44.313054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.821 [2024-07-13 13:48:44.313099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.822 [2024-07-13 13:48:44.313126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.822 [2024-07-13 13:48:44.313424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.822 [2024-07-13 13:48:44.313714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.822 [2024-07-13 13:48:44.313747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.822 [2024-07-13 13:48:44.313770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.822 [2024-07-13 13:48:44.317918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.822 [2024-07-13 13:48:44.327176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.822 [2024-07-13 13:48:44.327703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.822 [2024-07-13 13:48:44.327747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.822 [2024-07-13 13:48:44.327773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.822 [2024-07-13 13:48:44.328074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.822 [2024-07-13 13:48:44.328363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.822 [2024-07-13 13:48:44.328396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.822 [2024-07-13 13:48:44.328419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.822 [2024-07-13 13:48:44.332544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.822 [2024-07-13 13:48:44.341777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.822 [2024-07-13 13:48:44.342285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.822 [2024-07-13 13:48:44.342327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.822 [2024-07-13 13:48:44.342353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.822 [2024-07-13 13:48:44.342639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.822 [2024-07-13 13:48:44.342942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.822 [2024-07-13 13:48:44.342976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.822 [2024-07-13 13:48:44.342999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.822 [2024-07-13 13:48:44.347129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.822 [2024-07-13 13:48:44.356400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.822 [2024-07-13 13:48:44.356899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.822 [2024-07-13 13:48:44.356941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.822 [2024-07-13 13:48:44.356967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.822 [2024-07-13 13:48:44.357251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.822 [2024-07-13 13:48:44.357540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.822 [2024-07-13 13:48:44.357572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.822 [2024-07-13 13:48:44.357595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.822 [2024-07-13 13:48:44.361718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.822 [2024-07-13 13:48:44.370955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.822 [2024-07-13 13:48:44.371462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.822 [2024-07-13 13:48:44.371504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.822 [2024-07-13 13:48:44.371529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.822 [2024-07-13 13:48:44.371813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.822 [2024-07-13 13:48:44.372114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.822 [2024-07-13 13:48:44.372147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.822 [2024-07-13 13:48:44.372170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.822 [2024-07-13 13:48:44.376286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.822 [2024-07-13 13:48:44.385512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.822 [2024-07-13 13:48:44.386018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.822 [2024-07-13 13:48:44.386060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.822 [2024-07-13 13:48:44.386086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.822 [2024-07-13 13:48:44.386370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.822 [2024-07-13 13:48:44.386660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.822 [2024-07-13 13:48:44.386692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.822 [2024-07-13 13:48:44.386714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.822 [2024-07-13 13:48:44.390838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.822 [2024-07-13 13:48:44.400087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.822 [2024-07-13 13:48:44.400602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.822 [2024-07-13 13:48:44.400644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.822 [2024-07-13 13:48:44.400676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.822 [2024-07-13 13:48:44.400979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.822 [2024-07-13 13:48:44.401268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.822 [2024-07-13 13:48:44.401302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.822 [2024-07-13 13:48:44.401324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.822 [2024-07-13 13:48:44.405447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.822 [2024-07-13 13:48:44.414683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.822 [2024-07-13 13:48:44.415205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.822 [2024-07-13 13:48:44.415247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.822 [2024-07-13 13:48:44.415273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.822 [2024-07-13 13:48:44.415559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.822 [2024-07-13 13:48:44.415845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.822 [2024-07-13 13:48:44.415892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.822 [2024-07-13 13:48:44.415916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.822 [2024-07-13 13:48:44.420042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.822 [2024-07-13 13:48:44.429260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.822 [2024-07-13 13:48:44.429777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.822 [2024-07-13 13:48:44.429819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.822 [2024-07-13 13:48:44.429845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.822 [2024-07-13 13:48:44.430143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.822 [2024-07-13 13:48:44.430432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.822 [2024-07-13 13:48:44.430464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.822 [2024-07-13 13:48:44.430486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.822 [2024-07-13 13:48:44.434614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.822 [2024-07-13 13:48:44.443864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.822 [2024-07-13 13:48:44.444383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.822 [2024-07-13 13:48:44.444425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.822 [2024-07-13 13:48:44.444451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.822 [2024-07-13 13:48:44.444738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.822 [2024-07-13 13:48:44.445043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.822 [2024-07-13 13:48:44.445082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.822 [2024-07-13 13:48:44.445105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.822 [2024-07-13 13:48:44.449232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.822 [2024-07-13 13:48:44.458473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.822 [2024-07-13 13:48:44.458952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.822 [2024-07-13 13:48:44.458993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.822 [2024-07-13 13:48:44.459019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.822 [2024-07-13 13:48:44.459305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.822 [2024-07-13 13:48:44.459591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.822 [2024-07-13 13:48:44.459624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.822 [2024-07-13 13:48:44.459647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.823 [2024-07-13 13:48:44.463765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.823 [2024-07-13 13:48:44.473026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.823 [2024-07-13 13:48:44.473524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.823 [2024-07-13 13:48:44.473565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.823 [2024-07-13 13:48:44.473591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.823 [2024-07-13 13:48:44.473886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.823 [2024-07-13 13:48:44.474176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.823 [2024-07-13 13:48:44.474219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.823 [2024-07-13 13:48:44.474243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.823 [2024-07-13 13:48:44.478424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.823 [2024-07-13 13:48:44.487458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.823 [2024-07-13 13:48:44.487951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.823 [2024-07-13 13:48:44.487994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.823 [2024-07-13 13:48:44.488021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.823 [2024-07-13 13:48:44.488307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.823 [2024-07-13 13:48:44.488595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.823 [2024-07-13 13:48:44.488628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.823 [2024-07-13 13:48:44.488650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.823 [2024-07-13 13:48:44.492784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.823 [2024-07-13 13:48:44.502099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.823 [2024-07-13 13:48:44.502596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.823 [2024-07-13 13:48:44.502638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.823 [2024-07-13 13:48:44.502664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.823 [2024-07-13 13:48:44.502963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.823 [2024-07-13 13:48:44.503255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.823 [2024-07-13 13:48:44.503287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.823 [2024-07-13 13:48:44.503310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.823 [2024-07-13 13:48:44.507466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.823 [2024-07-13 13:48:44.516649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.823 [2024-07-13 13:48:44.517142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.823 [2024-07-13 13:48:44.517186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.823 [2024-07-13 13:48:44.517212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.823 [2024-07-13 13:48:44.517509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.823 [2024-07-13 13:48:44.517818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.823 [2024-07-13 13:48:44.517858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.823 [2024-07-13 13:48:44.517891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.823 [2024-07-13 13:48:44.522084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.823 [2024-07-13 13:48:44.531178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.823 [2024-07-13 13:48:44.531695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.823 [2024-07-13 13:48:44.531736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.823 [2024-07-13 13:48:44.531762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.823 [2024-07-13 13:48:44.532057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.823 [2024-07-13 13:48:44.532345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.823 [2024-07-13 13:48:44.532378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.823 [2024-07-13 13:48:44.532400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.823 [2024-07-13 13:48:44.536540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:09.823 [2024-07-13 13:48:44.545907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.823 [2024-07-13 13:48:44.546420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.823 [2024-07-13 13:48:44.546462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.823 [2024-07-13 13:48:44.546494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.823 [2024-07-13 13:48:44.546780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.823 [2024-07-13 13:48:44.547081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.823 [2024-07-13 13:48:44.547114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.823 [2024-07-13 13:48:44.547136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:09.823 [2024-07-13 13:48:44.551290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:09.823 [2024-07-13 13:48:44.560374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:09.823 [2024-07-13 13:48:44.560964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:09.823 [2024-07-13 13:48:44.561018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:09.823 [2024-07-13 13:48:44.561047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:09.823 [2024-07-13 13:48:44.561336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:09.823 [2024-07-13 13:48:44.561659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:09.823 [2024-07-13 13:48:44.561708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:09.823 [2024-07-13 13:48:44.561738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.082 [2024-07-13 13:48:44.566248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.082 [2024-07-13 13:48:44.575139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.082 [2024-07-13 13:48:44.575682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.082 [2024-07-13 13:48:44.575753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.082 [2024-07-13 13:48:44.575781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.082 [2024-07-13 13:48:44.576079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.082 [2024-07-13 13:48:44.576369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.083 [2024-07-13 13:48:44.576401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.083 [2024-07-13 13:48:44.576423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.083 [2024-07-13 13:48:44.580587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.083 [2024-07-13 13:48:44.589653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.083 [2024-07-13 13:48:44.590140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.083 [2024-07-13 13:48:44.590182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.083 [2024-07-13 13:48:44.590209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.083 [2024-07-13 13:48:44.590495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.083 [2024-07-13 13:48:44.590792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.083 [2024-07-13 13:48:44.590830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.083 [2024-07-13 13:48:44.590853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.083 [2024-07-13 13:48:44.595045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.083 [2024-07-13 13:48:44.604178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.083 [2024-07-13 13:48:44.604680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.083 [2024-07-13 13:48:44.604722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.083 [2024-07-13 13:48:44.604748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.083 [2024-07-13 13:48:44.605049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.083 [2024-07-13 13:48:44.605339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.083 [2024-07-13 13:48:44.605371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.083 [2024-07-13 13:48:44.605394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.083 [2024-07-13 13:48:44.609557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.083 [2024-07-13 13:48:44.618863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.083 [2024-07-13 13:48:44.619381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.083 [2024-07-13 13:48:44.619422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.083 [2024-07-13 13:48:44.619449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.083 [2024-07-13 13:48:44.619735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.083 [2024-07-13 13:48:44.620047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.083 [2024-07-13 13:48:44.620080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.083 [2024-07-13 13:48:44.620102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.083 [2024-07-13 13:48:44.624258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.083 [2024-07-13 13:48:44.633541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.083 [2024-07-13 13:48:44.634057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.083 [2024-07-13 13:48:44.634099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.083 [2024-07-13 13:48:44.634126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.083 [2024-07-13 13:48:44.634412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.083 [2024-07-13 13:48:44.634701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.083 [2024-07-13 13:48:44.634733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.083 [2024-07-13 13:48:44.634755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.083 [2024-07-13 13:48:44.638925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.083 [2024-07-13 13:48:44.648201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.083 [2024-07-13 13:48:44.648714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.083 [2024-07-13 13:48:44.648756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.083 [2024-07-13 13:48:44.648783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.083 [2024-07-13 13:48:44.649084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.083 [2024-07-13 13:48:44.649374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.083 [2024-07-13 13:48:44.649406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.083 [2024-07-13 13:48:44.649430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.083 [2024-07-13 13:48:44.653570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.083 [2024-07-13 13:48:44.662845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.083 [2024-07-13 13:48:44.663344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.083 [2024-07-13 13:48:44.663384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.083 [2024-07-13 13:48:44.663410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.083 [2024-07-13 13:48:44.663697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.083 [2024-07-13 13:48:44.664001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.083 [2024-07-13 13:48:44.664035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.083 [2024-07-13 13:48:44.664057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.083 [2024-07-13 13:48:44.668199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.083 [2024-07-13 13:48:44.677470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.083 [2024-07-13 13:48:44.677990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.083 [2024-07-13 13:48:44.678032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.083 [2024-07-13 13:48:44.678058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.083 [2024-07-13 13:48:44.678346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.083 [2024-07-13 13:48:44.678636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.083 [2024-07-13 13:48:44.678668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.083 [2024-07-13 13:48:44.678690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.083 [2024-07-13 13:48:44.682854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.083 [2024-07-13 13:48:44.691930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.083 [2024-07-13 13:48:44.692441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.083 [2024-07-13 13:48:44.692483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.083 [2024-07-13 13:48:44.692514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.083 [2024-07-13 13:48:44.692802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.083 [2024-07-13 13:48:44.693106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.083 [2024-07-13 13:48:44.693139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.083 [2024-07-13 13:48:44.693161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.083 [2024-07-13 13:48:44.697303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.083 [2024-07-13 13:48:44.706485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.083 [2024-07-13 13:48:44.706997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.083 [2024-07-13 13:48:44.707040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.083 [2024-07-13 13:48:44.707067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.083 [2024-07-13 13:48:44.707354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.083 [2024-07-13 13:48:44.707645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.083 [2024-07-13 13:48:44.707677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.083 [2024-07-13 13:48:44.707699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.083 [2024-07-13 13:48:44.711845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.083 [2024-07-13 13:48:44.721139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.083 [2024-07-13 13:48:44.721646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.083 [2024-07-13 13:48:44.721688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.083 [2024-07-13 13:48:44.721714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.083 [2024-07-13 13:48:44.722015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.083 [2024-07-13 13:48:44.722303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.083 [2024-07-13 13:48:44.722337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.083 [2024-07-13 13:48:44.722359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.083 [2024-07-13 13:48:44.726489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.083 [2024-07-13 13:48:44.735754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.083 [2024-07-13 13:48:44.736260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.084 [2024-07-13 13:48:44.736302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.084 [2024-07-13 13:48:44.736329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.084 [2024-07-13 13:48:44.736613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.084 [2024-07-13 13:48:44.736926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.084 [2024-07-13 13:48:44.736959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.084 [2024-07-13 13:48:44.736982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.084 [2024-07-13 13:48:44.741118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.084 [2024-07-13 13:48:44.750394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.084 [2024-07-13 13:48:44.750891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.084 [2024-07-13 13:48:44.750933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.084 [2024-07-13 13:48:44.750960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.084 [2024-07-13 13:48:44.751246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.084 [2024-07-13 13:48:44.751534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.084 [2024-07-13 13:48:44.751566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.084 [2024-07-13 13:48:44.751589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.084 [2024-07-13 13:48:44.755727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.084 [2024-07-13 13:48:44.764993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.084 [2024-07-13 13:48:44.765512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.084 [2024-07-13 13:48:44.765554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.084 [2024-07-13 13:48:44.765581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.084 [2024-07-13 13:48:44.765880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.084 [2024-07-13 13:48:44.766170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.084 [2024-07-13 13:48:44.766203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.084 [2024-07-13 13:48:44.766226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.084 [2024-07-13 13:48:44.770350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.084 [2024-07-13 13:48:44.779592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.084 [2024-07-13 13:48:44.780074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.084 [2024-07-13 13:48:44.780115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.084 [2024-07-13 13:48:44.780141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.084 [2024-07-13 13:48:44.780427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.084 [2024-07-13 13:48:44.780716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.084 [2024-07-13 13:48:44.780748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.084 [2024-07-13 13:48:44.780772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.084 [2024-07-13 13:48:44.784923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.084 [2024-07-13 13:48:44.794179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.084 [2024-07-13 13:48:44.794688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.084 [2024-07-13 13:48:44.794730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.084 [2024-07-13 13:48:44.794756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.084 [2024-07-13 13:48:44.795062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.084 [2024-07-13 13:48:44.795349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.084 [2024-07-13 13:48:44.795382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.084 [2024-07-13 13:48:44.795405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.084 [2024-07-13 13:48:44.799542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.084 [2024-07-13 13:48:44.808817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.084 [2024-07-13 13:48:44.809323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.084 [2024-07-13 13:48:44.809364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.084 [2024-07-13 13:48:44.809391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.084 [2024-07-13 13:48:44.809675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.084 [2024-07-13 13:48:44.809976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.084 [2024-07-13 13:48:44.810008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.084 [2024-07-13 13:48:44.810031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.084 [2024-07-13 13:48:44.814148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.084 [2024-07-13 13:48:44.823539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.084 [2024-07-13 13:48:44.824061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.084 [2024-07-13 13:48:44.824106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.084 [2024-07-13 13:48:44.824133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.084 [2024-07-13 13:48:44.824464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.084 [2024-07-13 13:48:44.824803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.084 [2024-07-13 13:48:44.824839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.084 [2024-07-13 13:48:44.824863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.344 [2024-07-13 13:48:44.829316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.344 [2024-07-13 13:48:44.838052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.344 [2024-07-13 13:48:44.838570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.344 [2024-07-13 13:48:44.838615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.344 [2024-07-13 13:48:44.838648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.344 [2024-07-13 13:48:44.838951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.344 [2024-07-13 13:48:44.839241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.344 [2024-07-13 13:48:44.839274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.344 [2024-07-13 13:48:44.839296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.344 [2024-07-13 13:48:44.843424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.344 [2024-07-13 13:48:44.852666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.344 [2024-07-13 13:48:44.853196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.344 [2024-07-13 13:48:44.853239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.344 [2024-07-13 13:48:44.853265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.344 [2024-07-13 13:48:44.853550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.344 [2024-07-13 13:48:44.853839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.344 [2024-07-13 13:48:44.853885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.344 [2024-07-13 13:48:44.853911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.344 [2024-07-13 13:48:44.858033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.344 [2024-07-13 13:48:44.867259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.344 [2024-07-13 13:48:44.867763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.344 [2024-07-13 13:48:44.867805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.344 [2024-07-13 13:48:44.867831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.344 [2024-07-13 13:48:44.868127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.344 [2024-07-13 13:48:44.868415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.344 [2024-07-13 13:48:44.868448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.344 [2024-07-13 13:48:44.868472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.344 [2024-07-13 13:48:44.872602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.344 [2024-07-13 13:48:44.881823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.344 [2024-07-13 13:48:44.882352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.344 [2024-07-13 13:48:44.882395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.344 [2024-07-13 13:48:44.882421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.344 [2024-07-13 13:48:44.882707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.344 [2024-07-13 13:48:44.883017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.344 [2024-07-13 13:48:44.883051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.344 [2024-07-13 13:48:44.883075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.344 [2024-07-13 13:48:44.887192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.344 [2024-07-13 13:48:44.896411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.344 [2024-07-13 13:48:44.896967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.344 [2024-07-13 13:48:44.897010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.344 [2024-07-13 13:48:44.897037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.344 [2024-07-13 13:48:44.897323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.344 [2024-07-13 13:48:44.897611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.344 [2024-07-13 13:48:44.897644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.344 [2024-07-13 13:48:44.897667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.344 [2024-07-13 13:48:44.901787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.344 [2024-07-13 13:48:44.911037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.344 [2024-07-13 13:48:44.911556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.344 [2024-07-13 13:48:44.911598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.344 [2024-07-13 13:48:44.911624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.344 [2024-07-13 13:48:44.911925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.344 [2024-07-13 13:48:44.912215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.345 [2024-07-13 13:48:44.912248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.345 [2024-07-13 13:48:44.912270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.345 [2024-07-13 13:48:44.916403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.345 [2024-07-13 13:48:44.925651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.345 [2024-07-13 13:48:44.926160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.345 [2024-07-13 13:48:44.926202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.345 [2024-07-13 13:48:44.926229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.345 [2024-07-13 13:48:44.926516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.345 [2024-07-13 13:48:44.926803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.345 [2024-07-13 13:48:44.926835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.345 [2024-07-13 13:48:44.926858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.345 [2024-07-13 13:48:44.931011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.345 [2024-07-13 13:48:44.940266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.345 [2024-07-13 13:48:44.940798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.345 [2024-07-13 13:48:44.940840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.345 [2024-07-13 13:48:44.940874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.345 [2024-07-13 13:48:44.941163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.345 [2024-07-13 13:48:44.941452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.345 [2024-07-13 13:48:44.941485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.345 [2024-07-13 13:48:44.941507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.345 [2024-07-13 13:48:44.945643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.345 [2024-07-13 13:48:44.954877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.345 [2024-07-13 13:48:44.955380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.345 [2024-07-13 13:48:44.955423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.345 [2024-07-13 13:48:44.955449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.345 [2024-07-13 13:48:44.955734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.345 [2024-07-13 13:48:44.956036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.345 [2024-07-13 13:48:44.956069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.345 [2024-07-13 13:48:44.956091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.345 [2024-07-13 13:48:44.960198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.345 [2024-07-13 13:48:44.969420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.345 [2024-07-13 13:48:44.969901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.345 [2024-07-13 13:48:44.969943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.345 [2024-07-13 13:48:44.969969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.345 [2024-07-13 13:48:44.970253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.345 [2024-07-13 13:48:44.970541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.345 [2024-07-13 13:48:44.970574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.345 [2024-07-13 13:48:44.970597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.345 [2024-07-13 13:48:44.974730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.345 [2024-07-13 13:48:44.983972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.345 [2024-07-13 13:48:44.984468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.345 [2024-07-13 13:48:44.984515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.345 [2024-07-13 13:48:44.984542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.345 [2024-07-13 13:48:44.984827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.345 [2024-07-13 13:48:44.985127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.345 [2024-07-13 13:48:44.985160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.345 [2024-07-13 13:48:44.985183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.345 [2024-07-13 13:48:44.989306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.345 [2024-07-13 13:48:44.998552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.345 [2024-07-13 13:48:44.999068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.345 [2024-07-13 13:48:44.999110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.345 [2024-07-13 13:48:44.999136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.345 [2024-07-13 13:48:44.999423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.345 [2024-07-13 13:48:44.999712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.345 [2024-07-13 13:48:44.999744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.345 [2024-07-13 13:48:44.999767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.345 [2024-07-13 13:48:45.003894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.345 [2024-07-13 13:48:45.013124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.345 [2024-07-13 13:48:45.013618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.345 [2024-07-13 13:48:45.013660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.345 [2024-07-13 13:48:45.013687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.345 [2024-07-13 13:48:45.013986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.345 [2024-07-13 13:48:45.014273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.345 [2024-07-13 13:48:45.014306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.345 [2024-07-13 13:48:45.014328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.345 [2024-07-13 13:48:45.018454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.345 [2024-07-13 13:48:45.027707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.345 [2024-07-13 13:48:45.028221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.345 [2024-07-13 13:48:45.028263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.345 [2024-07-13 13:48:45.028290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.345 [2024-07-13 13:48:45.028575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.345 [2024-07-13 13:48:45.028879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.345 [2024-07-13 13:48:45.028923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.345 [2024-07-13 13:48:45.028945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.345 [2024-07-13 13:48:45.033067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.345 [2024-07-13 13:48:45.042326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.345 [2024-07-13 13:48:45.042808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.345 [2024-07-13 13:48:45.042850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.345 [2024-07-13 13:48:45.042886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.345 [2024-07-13 13:48:45.043173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.345 [2024-07-13 13:48:45.043468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.345 [2024-07-13 13:48:45.043502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.345 [2024-07-13 13:48:45.043525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.345 [2024-07-13 13:48:45.047652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.345 [2024-07-13 13:48:45.056937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.345 [2024-07-13 13:48:45.057415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.345 [2024-07-13 13:48:45.057462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.345 [2024-07-13 13:48:45.057490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.345 [2024-07-13 13:48:45.057776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.345 [2024-07-13 13:48:45.058078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.345 [2024-07-13 13:48:45.058111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.345 [2024-07-13 13:48:45.058145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.345 [2024-07-13 13:48:45.062295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.345 [2024-07-13 13:48:45.071568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.345 [2024-07-13 13:48:45.072086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.345 [2024-07-13 13:48:45.072127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.345 [2024-07-13 13:48:45.072154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.346 [2024-07-13 13:48:45.072439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.346 [2024-07-13 13:48:45.072728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.346 [2024-07-13 13:48:45.072761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.346 [2024-07-13 13:48:45.072784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.346 [2024-07-13 13:48:45.076944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.346 [2024-07-13 13:48:45.086529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.346 [2024-07-13 13:48:45.087071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.346 [2024-07-13 13:48:45.087116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.346 [2024-07-13 13:48:45.087144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.346 [2024-07-13 13:48:45.087433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.346 [2024-07-13 13:48:45.087758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.346 [2024-07-13 13:48:45.087797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.346 [2024-07-13 13:48:45.087842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.605 [2024-07-13 13:48:45.092224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.605 [2024-07-13 13:48:45.101211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.605 [2024-07-13 13:48:45.101722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.605 [2024-07-13 13:48:45.101766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.605 [2024-07-13 13:48:45.101807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.605 [2024-07-13 13:48:45.102111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.605 [2024-07-13 13:48:45.102407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.605 [2024-07-13 13:48:45.102441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.605 [2024-07-13 13:48:45.102463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.605 [2024-07-13 13:48:45.106645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.605 [2024-07-13 13:48:45.115745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.605 [2024-07-13 13:48:45.116270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.605 [2024-07-13 13:48:45.116312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.605 [2024-07-13 13:48:45.116338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.605 [2024-07-13 13:48:45.116624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.605 [2024-07-13 13:48:45.116926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.605 [2024-07-13 13:48:45.116960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.605 [2024-07-13 13:48:45.116983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.605 [2024-07-13 13:48:45.121133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.605 [2024-07-13 13:48:45.130420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.606 [2024-07-13 13:48:45.130925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.606 [2024-07-13 13:48:45.130974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.606 [2024-07-13 13:48:45.131002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.606 [2024-07-13 13:48:45.131290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.606 [2024-07-13 13:48:45.131584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.606 [2024-07-13 13:48:45.131616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.606 [2024-07-13 13:48:45.131638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.606 [2024-07-13 13:48:45.135796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.606 [2024-07-13 13:48:45.145096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.606 [2024-07-13 13:48:45.145585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.606 [2024-07-13 13:48:45.145627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.606 [2024-07-13 13:48:45.145654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.606 [2024-07-13 13:48:45.145955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.606 [2024-07-13 13:48:45.146247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.606 [2024-07-13 13:48:45.146279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.606 [2024-07-13 13:48:45.146301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.606 [2024-07-13 13:48:45.150438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.606 [2024-07-13 13:48:45.159723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.606 [2024-07-13 13:48:45.160234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.606 [2024-07-13 13:48:45.160277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.606 [2024-07-13 13:48:45.160304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.606 [2024-07-13 13:48:45.160592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.606 [2024-07-13 13:48:45.160895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.606 [2024-07-13 13:48:45.160928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.606 [2024-07-13 13:48:45.160950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.606 [2024-07-13 13:48:45.165105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.606 [2024-07-13 13:48:45.174426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.606 [2024-07-13 13:48:45.174908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.606 [2024-07-13 13:48:45.174950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.606 [2024-07-13 13:48:45.174977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.606 [2024-07-13 13:48:45.175267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.606 [2024-07-13 13:48:45.175564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.606 [2024-07-13 13:48:45.175598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.606 [2024-07-13 13:48:45.175621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.606 [2024-07-13 13:48:45.179782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.606 [2024-07-13 13:48:45.189108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.606 [2024-07-13 13:48:45.189654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.606 [2024-07-13 13:48:45.189696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.606 [2024-07-13 13:48:45.189722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.606 [2024-07-13 13:48:45.190018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.606 [2024-07-13 13:48:45.190309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.606 [2024-07-13 13:48:45.190341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.606 [2024-07-13 13:48:45.190364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.606 [2024-07-13 13:48:45.194529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.606 [2024-07-13 13:48:45.203618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.606 [2024-07-13 13:48:45.204129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.606 [2024-07-13 13:48:45.204171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.606 [2024-07-13 13:48:45.204198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.606 [2024-07-13 13:48:45.204485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.606 [2024-07-13 13:48:45.204775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.606 [2024-07-13 13:48:45.204808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.606 [2024-07-13 13:48:45.204831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.606 [2024-07-13 13:48:45.208993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.606 [2024-07-13 13:48:45.218087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.606 [2024-07-13 13:48:45.218604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.606 [2024-07-13 13:48:45.218645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.606 [2024-07-13 13:48:45.218672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.606 [2024-07-13 13:48:45.218973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.606 [2024-07-13 13:48:45.219262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.606 [2024-07-13 13:48:45.219295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.606 [2024-07-13 13:48:45.219324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.606 [2024-07-13 13:48:45.223487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.606 [2024-07-13 13:48:45.232559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.606 [2024-07-13 13:48:45.233088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.606 [2024-07-13 13:48:45.233131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.606 [2024-07-13 13:48:45.233157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.606 [2024-07-13 13:48:45.233445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.606 [2024-07-13 13:48:45.233737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.606 [2024-07-13 13:48:45.233770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.606 [2024-07-13 13:48:45.233793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.606 [2024-07-13 13:48:45.237969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.606 [2024-07-13 13:48:45.247122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.606 [2024-07-13 13:48:45.247626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.606 [2024-07-13 13:48:45.247667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.606 [2024-07-13 13:48:45.247694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.606 [2024-07-13 13:48:45.247990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.606 [2024-07-13 13:48:45.248290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.606 [2024-07-13 13:48:45.248324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.606 [2024-07-13 13:48:45.248347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.606 [2024-07-13 13:48:45.252500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.606 [2024-07-13 13:48:45.261582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.606 [2024-07-13 13:48:45.262100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.606 [2024-07-13 13:48:45.262143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.606 [2024-07-13 13:48:45.262170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.606 [2024-07-13 13:48:45.262459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.606 [2024-07-13 13:48:45.262749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.606 [2024-07-13 13:48:45.262782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.606 [2024-07-13 13:48:45.262805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.606 [2024-07-13 13:48:45.266970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.606 [2024-07-13 13:48:45.276079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.606 [2024-07-13 13:48:45.276565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.606 [2024-07-13 13:48:45.276612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.606 [2024-07-13 13:48:45.276639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.606 [2024-07-13 13:48:45.276940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.606 [2024-07-13 13:48:45.277229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.607 [2024-07-13 13:48:45.277262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.607 [2024-07-13 13:48:45.277285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.607 [2024-07-13 13:48:45.281452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.607 [2024-07-13 13:48:45.290752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.607 [2024-07-13 13:48:45.291242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.607 [2024-07-13 13:48:45.291285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.607 [2024-07-13 13:48:45.291312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.607 [2024-07-13 13:48:45.291600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.607 [2024-07-13 13:48:45.291903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.607 [2024-07-13 13:48:45.291936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.607 [2024-07-13 13:48:45.291959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.607 [2024-07-13 13:48:45.296133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.607 [2024-07-13 13:48:45.305236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.607 [2024-07-13 13:48:45.305757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.607 [2024-07-13 13:48:45.305798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.607 [2024-07-13 13:48:45.305825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.607 [2024-07-13 13:48:45.306124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.607 [2024-07-13 13:48:45.306428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.607 [2024-07-13 13:48:45.306460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.607 [2024-07-13 13:48:45.306482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.607 [2024-07-13 13:48:45.310652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.607 [2024-07-13 13:48:45.319745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.607 [2024-07-13 13:48:45.320254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.607 [2024-07-13 13:48:45.320295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.607 [2024-07-13 13:48:45.320321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.607 [2024-07-13 13:48:45.320609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.607 [2024-07-13 13:48:45.320920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.607 [2024-07-13 13:48:45.320953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.607 [2024-07-13 13:48:45.320976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.607 [2024-07-13 13:48:45.325127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.607 [2024-07-13 13:48:45.334249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.607 [2024-07-13 13:48:45.334764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.607 [2024-07-13 13:48:45.334816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.607 [2024-07-13 13:48:45.334842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.607 [2024-07-13 13:48:45.335148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.607 [2024-07-13 13:48:45.335455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.607 [2024-07-13 13:48:45.335487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.607 [2024-07-13 13:48:45.335510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.607 [2024-07-13 13:48:45.339693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.607 [2024-07-13 13:48:45.349176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.866 [2024-07-13 13:48:45.349777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.866 [2024-07-13 13:48:45.349864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.866 [2024-07-13 13:48:45.349916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.867 [2024-07-13 13:48:45.350251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.867 [2024-07-13 13:48:45.350578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.867 [2024-07-13 13:48:45.350612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.867 [2024-07-13 13:48:45.350636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.867 [2024-07-13 13:48:45.355003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.867 [2024-07-13 13:48:45.363912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.867 [2024-07-13 13:48:45.364447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.867 [2024-07-13 13:48:45.364499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.867 [2024-07-13 13:48:45.364526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.867 [2024-07-13 13:48:45.364813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.867 [2024-07-13 13:48:45.365117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.867 [2024-07-13 13:48:45.365150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.867 [2024-07-13 13:48:45.365178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.867 [2024-07-13 13:48:45.369371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.867 [2024-07-13 13:48:45.378427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.867 [2024-07-13 13:48:45.378963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.867 [2024-07-13 13:48:45.379014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.867 [2024-07-13 13:48:45.379041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.867 [2024-07-13 13:48:45.379333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.867 [2024-07-13 13:48:45.379632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.867 [2024-07-13 13:48:45.379664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.867 [2024-07-13 13:48:45.379687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.867 [2024-07-13 13:48:45.383827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.867 [2024-07-13 13:48:45.392881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.867 [2024-07-13 13:48:45.393400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.867 [2024-07-13 13:48:45.393452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.867 [2024-07-13 13:48:45.393479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.867 [2024-07-13 13:48:45.393766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.867 [2024-07-13 13:48:45.394066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.867 [2024-07-13 13:48:45.394099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.867 [2024-07-13 13:48:45.394121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.867 [2024-07-13 13:48:45.398269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.867 [2024-07-13 13:48:45.407530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.867 [2024-07-13 13:48:45.408051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.867 [2024-07-13 13:48:45.408103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.867 [2024-07-13 13:48:45.408130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.867 [2024-07-13 13:48:45.408425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.867 [2024-07-13 13:48:45.408713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.867 [2024-07-13 13:48:45.408745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.867 [2024-07-13 13:48:45.408768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.867 [2024-07-13 13:48:45.412907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.867 [2024-07-13 13:48:45.422168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.867 [2024-07-13 13:48:45.422763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.867 [2024-07-13 13:48:45.422830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.867 [2024-07-13 13:48:45.422856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.867 [2024-07-13 13:48:45.423162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.867 [2024-07-13 13:48:45.423458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.867 [2024-07-13 13:48:45.423490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.867 [2024-07-13 13:48:45.423512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.867 [2024-07-13 13:48:45.427655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.867 [2024-07-13 13:48:45.436707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.867 [2024-07-13 13:48:45.437213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.867 [2024-07-13 13:48:45.437266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.867 [2024-07-13 13:48:45.437293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.867 [2024-07-13 13:48:45.437578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.867 [2024-07-13 13:48:45.437877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.867 [2024-07-13 13:48:45.437910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.867 [2024-07-13 13:48:45.437931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.867 [2024-07-13 13:48:45.442083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.867 [2024-07-13 13:48:45.451389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.867 [2024-07-13 13:48:45.451884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.867 [2024-07-13 13:48:45.451936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.867 [2024-07-13 13:48:45.451962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.867 [2024-07-13 13:48:45.452248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.867 [2024-07-13 13:48:45.452538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.867 [2024-07-13 13:48:45.452569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.867 [2024-07-13 13:48:45.452592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.867 [2024-07-13 13:48:45.456740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.867 [2024-07-13 13:48:45.465981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.867 [2024-07-13 13:48:45.466503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.867 [2024-07-13 13:48:45.466553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.867 [2024-07-13 13:48:45.466579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.867 [2024-07-13 13:48:45.466878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.867 [2024-07-13 13:48:45.467168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.867 [2024-07-13 13:48:45.467201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.867 [2024-07-13 13:48:45.467223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.867 [2024-07-13 13:48:45.471356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.867 [2024-07-13 13:48:45.480621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.867 [2024-07-13 13:48:45.481118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.867 [2024-07-13 13:48:45.481166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.867 [2024-07-13 13:48:45.481192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.867 [2024-07-13 13:48:45.481476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.867 [2024-07-13 13:48:45.481764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.867 [2024-07-13 13:48:45.481796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.867 [2024-07-13 13:48:45.481819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.867 [2024-07-13 13:48:45.485947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.867 [2024-07-13 13:48:45.495197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.867 [2024-07-13 13:48:45.495718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.867 [2024-07-13 13:48:45.495769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.867 [2024-07-13 13:48:45.495796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.867 [2024-07-13 13:48:45.496093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.867 [2024-07-13 13:48:45.496382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.867 [2024-07-13 13:48:45.496415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.867 [2024-07-13 13:48:45.496437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.867 [2024-07-13 13:48:45.500564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.868 [2024-07-13 13:48:45.509801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.868 [2024-07-13 13:48:45.510317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.868 [2024-07-13 13:48:45.510367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.868 [2024-07-13 13:48:45.510394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.868 [2024-07-13 13:48:45.510680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.868 [2024-07-13 13:48:45.510981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.868 [2024-07-13 13:48:45.511013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.868 [2024-07-13 13:48:45.511054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.868 [2024-07-13 13:48:45.515187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.868 [2024-07-13 13:48:45.524448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.868 [2024-07-13 13:48:45.524958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.868 [2024-07-13 13:48:45.525014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.868 [2024-07-13 13:48:45.525042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.868 [2024-07-13 13:48:45.525328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.868 [2024-07-13 13:48:45.525617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.868 [2024-07-13 13:48:45.525649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.868 [2024-07-13 13:48:45.525672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.868 [2024-07-13 13:48:45.529801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.868 [2024-07-13 13:48:45.539052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.868 [2024-07-13 13:48:45.539583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.868 [2024-07-13 13:48:45.539633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.868 [2024-07-13 13:48:45.539660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.868 [2024-07-13 13:48:45.539963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.868 [2024-07-13 13:48:45.540251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.868 [2024-07-13 13:48:45.540288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.868 [2024-07-13 13:48:45.540311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.868 [2024-07-13 13:48:45.544423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.868 [2024-07-13 13:48:45.553650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.868 [2024-07-13 13:48:45.554175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.868 [2024-07-13 13:48:45.554224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.868 [2024-07-13 13:48:45.554251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.868 [2024-07-13 13:48:45.554535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.868 [2024-07-13 13:48:45.554823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.868 [2024-07-13 13:48:45.554854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.868 [2024-07-13 13:48:45.554895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.868 [2024-07-13 13:48:45.559017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.868 [2024-07-13 13:48:45.568263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.868 [2024-07-13 13:48:45.568781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.868 [2024-07-13 13:48:45.568832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.868 [2024-07-13 13:48:45.568858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.868 [2024-07-13 13:48:45.569166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.868 [2024-07-13 13:48:45.569460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.868 [2024-07-13 13:48:45.569492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.868 [2024-07-13 13:48:45.569515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.868 [2024-07-13 13:48:45.573624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:10.868 [2024-07-13 13:48:45.582872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.868 [2024-07-13 13:48:45.583402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.868 [2024-07-13 13:48:45.583451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.868 [2024-07-13 13:48:45.583478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.868 [2024-07-13 13:48:45.583762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.868 [2024-07-13 13:48:45.584064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.868 [2024-07-13 13:48:45.584097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.868 [2024-07-13 13:48:45.584126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.868 [2024-07-13 13:48:45.588237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:10.868 [2024-07-13 13:48:45.597468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:10.868 [2024-07-13 13:48:45.597980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:10.868 [2024-07-13 13:48:45.598022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:10.868 [2024-07-13 13:48:45.598049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:10.868 [2024-07-13 13:48:45.598342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:10.868 [2024-07-13 13:48:45.598629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:10.868 [2024-07-13 13:48:45.598662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:10.868 [2024-07-13 13:48:45.598685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:10.868 [2024-07-13 13:48:45.602805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.127 [2024-07-13 13:48:45.612359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.127 [2024-07-13 13:48:45.612963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.127 [2024-07-13 13:48:45.613008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.127 [2024-07-13 13:48:45.613036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.128 [2024-07-13 13:48:45.613327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.128 [2024-07-13 13:48:45.613617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.128 [2024-07-13 13:48:45.613660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.128 [2024-07-13 13:48:45.613682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.128 [2024-07-13 13:48:45.618028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.128 [2024-07-13 13:48:45.626830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.128 [2024-07-13 13:48:45.627339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.128 [2024-07-13 13:48:45.627391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.128 [2024-07-13 13:48:45.627418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.128 [2024-07-13 13:48:45.627706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.128 [2024-07-13 13:48:45.628005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.128 [2024-07-13 13:48:45.628038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.128 [2024-07-13 13:48:45.628061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.128 [2024-07-13 13:48:45.632205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.128 [2024-07-13 13:48:45.641477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.128 [2024-07-13 13:48:45.641967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.128 [2024-07-13 13:48:45.642008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.128 [2024-07-13 13:48:45.642035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.128 [2024-07-13 13:48:45.642330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.128 [2024-07-13 13:48:45.642620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.128 [2024-07-13 13:48:45.642652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.128 [2024-07-13 13:48:45.642675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.128 [2024-07-13 13:48:45.646837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.128 [2024-07-13 13:48:45.656155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.128 [2024-07-13 13:48:45.656658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.128 [2024-07-13 13:48:45.656709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.128 [2024-07-13 13:48:45.656736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.128 [2024-07-13 13:48:45.657030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.128 [2024-07-13 13:48:45.657318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.128 [2024-07-13 13:48:45.657350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.128 [2024-07-13 13:48:45.657381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.128 [2024-07-13 13:48:45.661585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.128 [2024-07-13 13:48:45.670714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.128 [2024-07-13 13:48:45.671204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.128 [2024-07-13 13:48:45.671246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.128 [2024-07-13 13:48:45.671273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.128 [2024-07-13 13:48:45.671559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.128 [2024-07-13 13:48:45.671847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.128 [2024-07-13 13:48:45.671887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.128 [2024-07-13 13:48:45.671911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.128 [2024-07-13 13:48:45.676054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.128 [2024-07-13 13:48:45.685374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.128 [2024-07-13 13:48:45.685911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.128 [2024-07-13 13:48:45.685954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.128 [2024-07-13 13:48:45.685980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.128 [2024-07-13 13:48:45.686267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.128 [2024-07-13 13:48:45.686556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.128 [2024-07-13 13:48:45.686588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.128 [2024-07-13 13:48:45.686611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.128 [2024-07-13 13:48:45.690759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.128 [2024-07-13 13:48:45.699848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.128 [2024-07-13 13:48:45.700335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.128 [2024-07-13 13:48:45.700385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.128 [2024-07-13 13:48:45.700411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.128 [2024-07-13 13:48:45.700696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.128 [2024-07-13 13:48:45.700996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.128 [2024-07-13 13:48:45.701028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.128 [2024-07-13 13:48:45.701051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.128 [2024-07-13 13:48:45.705226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.128 [2024-07-13 13:48:45.714576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.128 [2024-07-13 13:48:45.715055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.128 [2024-07-13 13:48:45.715097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.128 [2024-07-13 13:48:45.715123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.128 [2024-07-13 13:48:45.715409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.128 [2024-07-13 13:48:45.715699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.128 [2024-07-13 13:48:45.715732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.128 [2024-07-13 13:48:45.715755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.128 [2024-07-13 13:48:45.719923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.128 [2024-07-13 13:48:45.729239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.128 [2024-07-13 13:48:45.729745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.128 [2024-07-13 13:48:45.729797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.128 [2024-07-13 13:48:45.729824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.128 [2024-07-13 13:48:45.730121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.128 [2024-07-13 13:48:45.730422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.128 [2024-07-13 13:48:45.730454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.128 [2024-07-13 13:48:45.730477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.128 [2024-07-13 13:48:45.734618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.128 [2024-07-13 13:48:45.743691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.128 [2024-07-13 13:48:45.744252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.128 [2024-07-13 13:48:45.744325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.128 [2024-07-13 13:48:45.744352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.128 [2024-07-13 13:48:45.744637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.128 [2024-07-13 13:48:45.744942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.128 [2024-07-13 13:48:45.744984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.128 [2024-07-13 13:48:45.745006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.128 [2024-07-13 13:48:45.749180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.128 [2024-07-13 13:48:45.758247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.128 [2024-07-13 13:48:45.758823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.128 [2024-07-13 13:48:45.758899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.128 [2024-07-13 13:48:45.758927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.128 [2024-07-13 13:48:45.759220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.128 [2024-07-13 13:48:45.759509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.128 [2024-07-13 13:48:45.759542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.128 [2024-07-13 13:48:45.759564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.128 [2024-07-13 13:48:45.763689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.128 [2024-07-13 13:48:45.772733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.128 [2024-07-13 13:48:45.773239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.128 [2024-07-13 13:48:45.773288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.129 [2024-07-13 13:48:45.773314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.129 [2024-07-13 13:48:45.773599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.129 [2024-07-13 13:48:45.773900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.129 [2024-07-13 13:48:45.773933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.129 [2024-07-13 13:48:45.773961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.129 [2024-07-13 13:48:45.778112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.129 [2024-07-13 13:48:45.787377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.129 [2024-07-13 13:48:45.787901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.129 [2024-07-13 13:48:45.787953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.129 [2024-07-13 13:48:45.787979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.129 [2024-07-13 13:48:45.788266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.129 [2024-07-13 13:48:45.788554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.129 [2024-07-13 13:48:45.788587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.129 [2024-07-13 13:48:45.788609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.129 [2024-07-13 13:48:45.792747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.129 [2024-07-13 13:48:45.802020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.129 [2024-07-13 13:48:45.802507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.129 [2024-07-13 13:48:45.802558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.129 [2024-07-13 13:48:45.802584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.129 [2024-07-13 13:48:45.802882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.129 [2024-07-13 13:48:45.803171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.129 [2024-07-13 13:48:45.803203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.129 [2024-07-13 13:48:45.803231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.129 [2024-07-13 13:48:45.807381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.129 [2024-07-13 13:48:45.816634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.129 [2024-07-13 13:48:45.817207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.129 [2024-07-13 13:48:45.817257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.129 [2024-07-13 13:48:45.817283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.129 [2024-07-13 13:48:45.817569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.129 [2024-07-13 13:48:45.817856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.129 [2024-07-13 13:48:45.817899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.129 [2024-07-13 13:48:45.817927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.129 [2024-07-13 13:48:45.822074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.129 [2024-07-13 13:48:45.831369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.129 [2024-07-13 13:48:45.831836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.129 [2024-07-13 13:48:45.831892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.129 [2024-07-13 13:48:45.831928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.129 [2024-07-13 13:48:45.832214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.129 [2024-07-13 13:48:45.832502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.129 [2024-07-13 13:48:45.832534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.129 [2024-07-13 13:48:45.832557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.129 [2024-07-13 13:48:45.836699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.129 [2024-07-13 13:48:45.846012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.129 [2024-07-13 13:48:45.846521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.129 [2024-07-13 13:48:45.846570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.129 [2024-07-13 13:48:45.846596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.129 [2024-07-13 13:48:45.846896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.129 [2024-07-13 13:48:45.847185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.129 [2024-07-13 13:48:45.847217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.129 [2024-07-13 13:48:45.847239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.129 [2024-07-13 13:48:45.851374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.129 [2024-07-13 13:48:45.860646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.129 [2024-07-13 13:48:45.861112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.129 [2024-07-13 13:48:45.861163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.129 [2024-07-13 13:48:45.861190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.129 [2024-07-13 13:48:45.861477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.129 [2024-07-13 13:48:45.861767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.129 [2024-07-13 13:48:45.861800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.129 [2024-07-13 13:48:45.861823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.129 [2024-07-13 13:48:45.865962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.389 [2024-07-13 13:48:45.875492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.389 [2024-07-13 13:48:45.876045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.389 [2024-07-13 13:48:45.876099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.389 [2024-07-13 13:48:45.876127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.389 [2024-07-13 13:48:45.876455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.389 [2024-07-13 13:48:45.876797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.389 [2024-07-13 13:48:45.876832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.389 [2024-07-13 13:48:45.876855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.389 [2024-07-13 13:48:45.881037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.389 [2024-07-13 13:48:45.890066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.389 [2024-07-13 13:48:45.890596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.389 [2024-07-13 13:48:45.890645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.389 [2024-07-13 13:48:45.890672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.389 [2024-07-13 13:48:45.890971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.389 [2024-07-13 13:48:45.891260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.389 [2024-07-13 13:48:45.891291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.389 [2024-07-13 13:48:45.891315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.389 [2024-07-13 13:48:45.895449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.389 [2024-07-13 13:48:45.904698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.389 [2024-07-13 13:48:45.905220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.389 [2024-07-13 13:48:45.905272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.389 [2024-07-13 13:48:45.905299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.389 [2024-07-13 13:48:45.905591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.389 [2024-07-13 13:48:45.905890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.389 [2024-07-13 13:48:45.905923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.389 [2024-07-13 13:48:45.905953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.389 [2024-07-13 13:48:45.910082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.389 [2024-07-13 13:48:45.919320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.389 [2024-07-13 13:48:45.919809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.389 [2024-07-13 13:48:45.919858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.389 [2024-07-13 13:48:45.919896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.389 [2024-07-13 13:48:45.920183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.389 [2024-07-13 13:48:45.920472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.389 [2024-07-13 13:48:45.920503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.390 [2024-07-13 13:48:45.920526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.390 [2024-07-13 13:48:45.924650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.390 [2024-07-13 13:48:45.933895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.390 [2024-07-13 13:48:45.934420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.390 [2024-07-13 13:48:45.934480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.390 [2024-07-13 13:48:45.934507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.390 [2024-07-13 13:48:45.934794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.390 [2024-07-13 13:48:45.935096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.390 [2024-07-13 13:48:45.935128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.390 [2024-07-13 13:48:45.935161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.390 [2024-07-13 13:48:45.939280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.390 [2024-07-13 13:48:45.948528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.390 [2024-07-13 13:48:45.949052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.390 [2024-07-13 13:48:45.949102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.390 [2024-07-13 13:48:45.949136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.390 [2024-07-13 13:48:45.949421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.390 [2024-07-13 13:48:45.949710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.390 [2024-07-13 13:48:45.949747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.390 [2024-07-13 13:48:45.949771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.390 [2024-07-13 13:48:45.953921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.390 [2024-07-13 13:48:45.963200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.390 [2024-07-13 13:48:45.963695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.390 [2024-07-13 13:48:45.963743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.390 [2024-07-13 13:48:45.963770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.390 [2024-07-13 13:48:45.964068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.390 [2024-07-13 13:48:45.964355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.390 [2024-07-13 13:48:45.964387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.390 [2024-07-13 13:48:45.964410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.390 [2024-07-13 13:48:45.968537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.390 [2024-07-13 13:48:45.977786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.390 [2024-07-13 13:48:45.978337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.390 [2024-07-13 13:48:45.978405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.390 [2024-07-13 13:48:45.978431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.390 [2024-07-13 13:48:45.978715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.390 [2024-07-13 13:48:45.979014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.390 [2024-07-13 13:48:45.979047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.390 [2024-07-13 13:48:45.979078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 454917 Killed "${NVMF_APP[@]}" "$@" 00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:11.390 [2024-07-13 13:48:45.983200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=456132 00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 456132 00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 456132 ']' 00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:11.390 13:48:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:11.390 [2024-07-13 13:48:45.992208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.390 [2024-07-13 13:48:45.992699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.390 [2024-07-13 13:48:45.992746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.390 [2024-07-13 13:48:45.992772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.390 [2024-07-13 13:48:45.993070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.390 [2024-07-13 13:48:45.993358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.390 [2024-07-13 13:48:45.993390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.390 [2024-07-13 13:48:45.993413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.390 [2024-07-13 13:48:45.997564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.390 [2024-07-13 13:48:46.006872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.390 [2024-07-13 13:48:46.007400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.390 [2024-07-13 13:48:46.007451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.390 [2024-07-13 13:48:46.007477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.390 [2024-07-13 13:48:46.007765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.390 [2024-07-13 13:48:46.008068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.390 [2024-07-13 13:48:46.008101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.390 [2024-07-13 13:48:46.008131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.390 [2024-07-13 13:48:46.012305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
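At this point bdevperf.sh has killed the previous nvmf_tgt instance (pid 454917) and tgt_init is starting a fresh one through nvmfappstart: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace with core mask 0xE and all tracepoint groups enabled (-e 0xFFFF), and waitforlisten blocks until the new process (pid 456132) answers on /var/tmp/spdk.sock. The reset attempts interleaved above and below keep failing until the target's TCP listener is re-created. The exact RPCs issued by tgt_init are not shown in this log; a minimal sketch of the usual steps, with illustrative bdev and serial-number names, looks like:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp                                    # register the TCP transport
  $RPC bdev_malloc_create 64 512 -b Malloc0                            # 64 MB ram-backed bdev to export (illustrative name)
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once a listener on 10.0.0.2:4420 exists again, the reconnect attempts above stop failing with ECONNREFUSED.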
00:37:11.390 [2024-07-13 13:48:46.021375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.390 [2024-07-13 13:48:46.021885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.390 [2024-07-13 13:48:46.021926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.390 [2024-07-13 13:48:46.021954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.390 [2024-07-13 13:48:46.022241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.390 [2024-07-13 13:48:46.022531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.390 [2024-07-13 13:48:46.022563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.390 [2024-07-13 13:48:46.022586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.390 [2024-07-13 13:48:46.026742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.390 [2024-07-13 13:48:46.035826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.390 [2024-07-13 13:48:46.036342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.390 [2024-07-13 13:48:46.036392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.390 [2024-07-13 13:48:46.036419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.390 [2024-07-13 13:48:46.036705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.390 [2024-07-13 13:48:46.037007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.390 [2024-07-13 13:48:46.037039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.390 [2024-07-13 13:48:46.037069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.390 [2024-07-13 13:48:46.041233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.390 [2024-07-13 13:48:46.050343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.390 [2024-07-13 13:48:46.050882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.390 [2024-07-13 13:48:46.050934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.390 [2024-07-13 13:48:46.050961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.390 [2024-07-13 13:48:46.051251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.390 [2024-07-13 13:48:46.051543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.390 [2024-07-13 13:48:46.051574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.390 [2024-07-13 13:48:46.051597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.390 [2024-07-13 13:48:46.056002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.390 [2024-07-13 13:48:46.064885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.390 [2024-07-13 13:48:46.065383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.391 [2024-07-13 13:48:46.065436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.391 [2024-07-13 13:48:46.065462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.391 [2024-07-13 13:48:46.065752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.391 [2024-07-13 13:48:46.066052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.391 [2024-07-13 13:48:46.066085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.391 [2024-07-13 13:48:46.066107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.391 [2024-07-13 13:48:46.067173] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:11.391 [2024-07-13 13:48:46.067311] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:11.391 [2024-07-13 13:48:46.070340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.391 [2024-07-13 13:48:46.079573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.391 [2024-07-13 13:48:46.080089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.391 [2024-07-13 13:48:46.080132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.391 [2024-07-13 13:48:46.080164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.391 [2024-07-13 13:48:46.080451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.391 [2024-07-13 13:48:46.080743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.391 [2024-07-13 13:48:46.080775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.391 [2024-07-13 13:48:46.080798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.391 [2024-07-13 13:48:46.085023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.391 [2024-07-13 13:48:46.094217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.391 [2024-07-13 13:48:46.094750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.391 [2024-07-13 13:48:46.094801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.391 [2024-07-13 13:48:46.094827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.391 [2024-07-13 13:48:46.095131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.391 [2024-07-13 13:48:46.095423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.391 [2024-07-13 13:48:46.095455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.391 [2024-07-13 13:48:46.095479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.391 [2024-07-13 13:48:46.099665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.391 [2024-07-13 13:48:46.108873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.391 [2024-07-13 13:48:46.109407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.391 [2024-07-13 13:48:46.109458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.391 [2024-07-13 13:48:46.109484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.391 [2024-07-13 13:48:46.109773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.391 [2024-07-13 13:48:46.110083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.391 [2024-07-13 13:48:46.110116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.391 [2024-07-13 13:48:46.110144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.391 [2024-07-13 13:48:46.114345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.391 [2024-07-13 13:48:46.123461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.391 [2024-07-13 13:48:46.123973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.391 [2024-07-13 13:48:46.124025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.391 [2024-07-13 13:48:46.124051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.391 [2024-07-13 13:48:46.124345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.391 [2024-07-13 13:48:46.124637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.391 [2024-07-13 13:48:46.124669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.391 [2024-07-13 13:48:46.124692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.391 [2024-07-13 13:48:46.128967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.651 [2024-07-13 13:48:46.138372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.651 [2024-07-13 13:48:46.138954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.651 [2024-07-13 13:48:46.139008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.651 [2024-07-13 13:48:46.139036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.651 [2024-07-13 13:48:46.139370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.651 [2024-07-13 13:48:46.139699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.651 [2024-07-13 13:48:46.139734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.651 [2024-07-13 13:48:46.139757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.651 [2024-07-13 13:48:46.143937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.651 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.651 [2024-07-13 13:48:46.153027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.651 [2024-07-13 13:48:46.153537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.651 [2024-07-13 13:48:46.153589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.651 [2024-07-13 13:48:46.153616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.651 [2024-07-13 13:48:46.153913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.651 [2024-07-13 13:48:46.154203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.651 [2024-07-13 13:48:46.154236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.651 [2024-07-13 13:48:46.154259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.651 [2024-07-13 13:48:46.158441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.651 [2024-07-13 13:48:46.167618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.651 [2024-07-13 13:48:46.168146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.651 [2024-07-13 13:48:46.168191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.651 [2024-07-13 13:48:46.168218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.651 [2024-07-13 13:48:46.168505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.651 [2024-07-13 13:48:46.168796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.651 [2024-07-13 13:48:46.168834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.651 [2024-07-13 13:48:46.168875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.651 [2024-07-13 13:48:46.173086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.651 [2024-07-13 13:48:46.182255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.651 [2024-07-13 13:48:46.182766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.651 [2024-07-13 13:48:46.182807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.651 [2024-07-13 13:48:46.182833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.651 [2024-07-13 13:48:46.183132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.651 [2024-07-13 13:48:46.183423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.651 [2024-07-13 13:48:46.183454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.651 [2024-07-13 13:48:46.183477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.652 [2024-07-13 13:48:46.187654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.652 [2024-07-13 13:48:46.196799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.652 [2024-07-13 13:48:46.197301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.652 [2024-07-13 13:48:46.197343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.652 [2024-07-13 13:48:46.197370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.652 [2024-07-13 13:48:46.197659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.652 [2024-07-13 13:48:46.197966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.652 [2024-07-13 13:48:46.197998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.652 [2024-07-13 13:48:46.198020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.652 [2024-07-13 13:48:46.202210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.652 [2024-07-13 13:48:46.211352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.652 [2024-07-13 13:48:46.211835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.652 [2024-07-13 13:48:46.211882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.652 [2024-07-13 13:48:46.211909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.652 [2024-07-13 13:48:46.212198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.652 [2024-07-13 13:48:46.212492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.652 [2024-07-13 13:48:46.212523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.652 [2024-07-13 13:48:46.212547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.652 [2024-07-13 13:48:46.216755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.652 [2024-07-13 13:48:46.222579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:11.652 [2024-07-13 13:48:46.225913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.652 [2024-07-13 13:48:46.226417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.652 [2024-07-13 13:48:46.226458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.652 [2024-07-13 13:48:46.226485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.652 [2024-07-13 13:48:46.226772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.652 [2024-07-13 13:48:46.227075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.652 [2024-07-13 13:48:46.227107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.652 [2024-07-13 13:48:46.227129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.652 [2024-07-13 13:48:46.231349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.652 [2024-07-13 13:48:46.240656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.652 [2024-07-13 13:48:46.241331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.652 [2024-07-13 13:48:46.241383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.652 [2024-07-13 13:48:46.241414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.652 [2024-07-13 13:48:46.241712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.652 [2024-07-13 13:48:46.242021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.652 [2024-07-13 13:48:46.242054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.652 [2024-07-13 13:48:46.242080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.652 [2024-07-13 13:48:46.246310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.652 [2024-07-13 13:48:46.255283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.652 [2024-07-13 13:48:46.255771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.652 [2024-07-13 13:48:46.255812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.652 [2024-07-13 13:48:46.255838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.652 [2024-07-13 13:48:46.256147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.652 [2024-07-13 13:48:46.256450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.652 [2024-07-13 13:48:46.256481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.652 [2024-07-13 13:48:46.256503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.652 [2024-07-13 13:48:46.260753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.652 [2024-07-13 13:48:46.269874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.652 [2024-07-13 13:48:46.270395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.652 [2024-07-13 13:48:46.270435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.652 [2024-07-13 13:48:46.270468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.652 [2024-07-13 13:48:46.270765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.652 [2024-07-13 13:48:46.271085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.652 [2024-07-13 13:48:46.271117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.652 [2024-07-13 13:48:46.271140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.652 [2024-07-13 13:48:46.275403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.652 [2024-07-13 13:48:46.284418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.652 [2024-07-13 13:48:46.284945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.652 [2024-07-13 13:48:46.284987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.652 [2024-07-13 13:48:46.285013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.652 [2024-07-13 13:48:46.285303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.652 [2024-07-13 13:48:46.285597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.652 [2024-07-13 13:48:46.285628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.652 [2024-07-13 13:48:46.285650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.652 [2024-07-13 13:48:46.289854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.652 [2024-07-13 13:48:46.299030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.652 [2024-07-13 13:48:46.299546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.652 [2024-07-13 13:48:46.299587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.652 [2024-07-13 13:48:46.299613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.652 [2024-07-13 13:48:46.299934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.652 [2024-07-13 13:48:46.300251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.652 [2024-07-13 13:48:46.300283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.652 [2024-07-13 13:48:46.300305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.652 [2024-07-13 13:48:46.304487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.652 [2024-07-13 13:48:46.313628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.652 [2024-07-13 13:48:46.314158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.652 [2024-07-13 13:48:46.314199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.652 [2024-07-13 13:48:46.314225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.652 [2024-07-13 13:48:46.314515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.652 [2024-07-13 13:48:46.314813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.652 [2024-07-13 13:48:46.314844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.652 [2024-07-13 13:48:46.314874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.652 [2024-07-13 13:48:46.319092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.652 [2024-07-13 13:48:46.328302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.652 [2024-07-13 13:48:46.328811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.652 [2024-07-13 13:48:46.328852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.652 [2024-07-13 13:48:46.328887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.652 [2024-07-13 13:48:46.329179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.652 [2024-07-13 13:48:46.329473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.652 [2024-07-13 13:48:46.329504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.652 [2024-07-13 13:48:46.329526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.652 [2024-07-13 13:48:46.333772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.652 [2024-07-13 13:48:46.343034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.652 [2024-07-13 13:48:46.343550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.652 [2024-07-13 13:48:46.343591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.652 [2024-07-13 13:48:46.343617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.652 [2024-07-13 13:48:46.343914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.653 [2024-07-13 13:48:46.344208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.653 [2024-07-13 13:48:46.344256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.653 [2024-07-13 13:48:46.344278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.653 [2024-07-13 13:48:46.348533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.653 [2024-07-13 13:48:46.357853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.653 [2024-07-13 13:48:46.358411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.653 [2024-07-13 13:48:46.358454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.653 [2024-07-13 13:48:46.358481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.653 [2024-07-13 13:48:46.358775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.653 [2024-07-13 13:48:46.359084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.653 [2024-07-13 13:48:46.359116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.653 [2024-07-13 13:48:46.359139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.653 [2024-07-13 13:48:46.363412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.653 [2024-07-13 13:48:46.372509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.653 [2024-07-13 13:48:46.373161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.653 [2024-07-13 13:48:46.373211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.653 [2024-07-13 13:48:46.373241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.653 [2024-07-13 13:48:46.373545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.653 [2024-07-13 13:48:46.373845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.653 [2024-07-13 13:48:46.373888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.653 [2024-07-13 13:48:46.373915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.653 [2024-07-13 13:48:46.378188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.653 [2024-07-13 13:48:46.387248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.653 [2024-07-13 13:48:46.387765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.653 [2024-07-13 13:48:46.387805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.653 [2024-07-13 13:48:46.387832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.653 [2024-07-13 13:48:46.388136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.653 [2024-07-13 13:48:46.388433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.653 [2024-07-13 13:48:46.388464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.653 [2024-07-13 13:48:46.388487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.653 [2024-07-13 13:48:46.392908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.913 [2024-07-13 13:48:46.402077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.913 [2024-07-13 13:48:46.402588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.913 [2024-07-13 13:48:46.402632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.913 [2024-07-13 13:48:46.402659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.913 [2024-07-13 13:48:46.402967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.913 [2024-07-13 13:48:46.403264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.913 [2024-07-13 13:48:46.403296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.913 [2024-07-13 13:48:46.403318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.913 [2024-07-13 13:48:46.407575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.913 [2024-07-13 13:48:46.416759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.913 [2024-07-13 13:48:46.417272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.913 [2024-07-13 13:48:46.417313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.913 [2024-07-13 13:48:46.417345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.913 [2024-07-13 13:48:46.417638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.913 [2024-07-13 13:48:46.417945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.913 [2024-07-13 13:48:46.417977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.913 [2024-07-13 13:48:46.417999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.913 [2024-07-13 13:48:46.422223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.913 [2024-07-13 13:48:46.431372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.913 [2024-07-13 13:48:46.431898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.913 [2024-07-13 13:48:46.431940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.913 [2024-07-13 13:48:46.431966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.913 [2024-07-13 13:48:46.432258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.913 [2024-07-13 13:48:46.432551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.913 [2024-07-13 13:48:46.432582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.913 [2024-07-13 13:48:46.432604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.913 [2024-07-13 13:48:46.436801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.913 [2024-07-13 13:48:46.446012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.913 [2024-07-13 13:48:46.446523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.913 [2024-07-13 13:48:46.446565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.913 [2024-07-13 13:48:46.446591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.913 [2024-07-13 13:48:46.446888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.913 [2024-07-13 13:48:46.447181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.913 [2024-07-13 13:48:46.447212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.913 [2024-07-13 13:48:46.447234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.913 [2024-07-13 13:48:46.451450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.913 [2024-07-13 13:48:46.460634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.913 [2024-07-13 13:48:46.461138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.913 [2024-07-13 13:48:46.461180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.913 [2024-07-13 13:48:46.461206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.913 [2024-07-13 13:48:46.461497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.913 [2024-07-13 13:48:46.461797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.913 [2024-07-13 13:48:46.461829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.913 [2024-07-13 13:48:46.461851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.913 [2024-07-13 13:48:46.466090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.913 [2024-07-13 13:48:46.475315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.913 [2024-07-13 13:48:46.475836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.913 [2024-07-13 13:48:46.475883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.913 [2024-07-13 13:48:46.475920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.913 [2024-07-13 13:48:46.476213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.913 [2024-07-13 13:48:46.476506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.913 [2024-07-13 13:48:46.476537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.913 [2024-07-13 13:48:46.476559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.913 [2024-07-13 13:48:46.480808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.913 [2024-07-13 13:48:46.488948] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:11.913 [2024-07-13 13:48:46.488995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:11.913 [2024-07-13 13:48:46.489030] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:11.913 [2024-07-13 13:48:46.489052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:11.913 [2024-07-13 13:48:46.489073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
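Because the target was started with -e 0xFFFF, every tracepoint group is enabled and app_setup_trace points at the shared-memory trace file for this instance. Assuming the usual spdk_trace tool from the same build tree, the trace can be inspected either live or offline, roughly:

  ./build/bin/spdk_trace -s nvmf -i 0        # live snapshot, as the notice above suggests
  cp /dev/shm/nvmf_trace.0 /tmp/             # or keep the file for offline analysis
  ./build/bin/spdk_trace -f /tmp/nvmf_trace.0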
00:37:11.913 [2024-07-13 13:48:46.489265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:11.913 [2024-07-13 13:48:46.489315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:11.913 [2024-07-13 13:48:46.489326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:11.913 [2024-07-13 13:48:46.490076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.913 [2024-07-13 13:48:46.490596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.913 [2024-07-13 13:48:46.490637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.913 [2024-07-13 13:48:46.490664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.913 [2024-07-13 13:48:46.490967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.913 [2024-07-13 13:48:46.491269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.913 [2024-07-13 13:48:46.491301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.913 [2024-07-13 13:48:46.491323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.913 [2024-07-13 13:48:46.495676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.913 [2024-07-13 13:48:46.504910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.913 [2024-07-13 13:48:46.505623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.913 [2024-07-13 13:48:46.505686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.913 [2024-07-13 13:48:46.505718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.913 [2024-07-13 13:48:46.506043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.913 [2024-07-13 13:48:46.506356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.913 [2024-07-13 13:48:46.506388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.913 [2024-07-13 13:48:46.506413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.913 [2024-07-13 13:48:46.510769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
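The three reactor notices above match the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, so bit 0 (core 0) is clear and cores 1, 2 and 3 each get a reactor, which is also why spdk_app_start reported three cores available. The decode is just bit-testing:

  for bit in 0 1 2 3; do [ $(( (0xE >> bit) & 1 )) -eq 1 ] && echo "reactor on core $bit"; done   # prints cores 1, 2, 3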
00:37:11.913 [2024-07-13 13:48:46.519607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.913 [2024-07-13 13:48:46.520196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.913 [2024-07-13 13:48:46.520238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.913 [2024-07-13 13:48:46.520264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.913 [2024-07-13 13:48:46.520558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.913 [2024-07-13 13:48:46.520853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.913 [2024-07-13 13:48:46.520892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.913 [2024-07-13 13:48:46.520923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.913 [2024-07-13 13:48:46.525197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.913 [2024-07-13 13:48:46.534349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.913 [2024-07-13 13:48:46.534840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.913 [2024-07-13 13:48:46.534887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.913 [2024-07-13 13:48:46.534925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.913 [2024-07-13 13:48:46.535235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.914 [2024-07-13 13:48:46.535530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.914 [2024-07-13 13:48:46.535562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.914 [2024-07-13 13:48:46.535585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.914 [2024-07-13 13:48:46.539888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.914 [2024-07-13 13:48:46.549010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.914 [2024-07-13 13:48:46.549503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.914 [2024-07-13 13:48:46.549545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.914 [2024-07-13 13:48:46.549571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.914 [2024-07-13 13:48:46.549862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.914 [2024-07-13 13:48:46.550177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.914 [2024-07-13 13:48:46.550210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.914 [2024-07-13 13:48:46.550232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.914 [2024-07-13 13:48:46.554481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.914 [2024-07-13 13:48:46.563664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.914 [2024-07-13 13:48:46.564164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.914 [2024-07-13 13:48:46.564205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.914 [2024-07-13 13:48:46.564231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.914 [2024-07-13 13:48:46.564520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.914 [2024-07-13 13:48:46.564812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.914 [2024-07-13 13:48:46.564842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.914 [2024-07-13 13:48:46.564874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.914 [2024-07-13 13:48:46.569142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.914 [2024-07-13 13:48:46.578263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.914 [2024-07-13 13:48:46.578920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.914 [2024-07-13 13:48:46.578974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.914 [2024-07-13 13:48:46.579005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.914 [2024-07-13 13:48:46.579313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.914 [2024-07-13 13:48:46.579615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.914 [2024-07-13 13:48:46.579648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.914 [2024-07-13 13:48:46.579674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.914 [2024-07-13 13:48:46.584051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.914 [2024-07-13 13:48:46.593186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.914 [2024-07-13 13:48:46.593935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.914 [2024-07-13 13:48:46.593988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.914 [2024-07-13 13:48:46.594019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.914 [2024-07-13 13:48:46.594332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.914 [2024-07-13 13:48:46.594638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.914 [2024-07-13 13:48:46.594671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.914 [2024-07-13 13:48:46.594709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.914 [2024-07-13 13:48:46.599081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.914 [2024-07-13 13:48:46.607992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.914 [2024-07-13 13:48:46.608592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.914 [2024-07-13 13:48:46.608642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.914 [2024-07-13 13:48:46.608673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.914 [2024-07-13 13:48:46.608998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.914 [2024-07-13 13:48:46.609304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.914 [2024-07-13 13:48:46.609337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.914 [2024-07-13 13:48:46.609363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.914 [2024-07-13 13:48:46.613688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.914 [2024-07-13 13:48:46.622846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.914 [2024-07-13 13:48:46.623381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.914 [2024-07-13 13:48:46.623421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.914 [2024-07-13 13:48:46.623447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.914 [2024-07-13 13:48:46.623741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.914 [2024-07-13 13:48:46.624045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.914 [2024-07-13 13:48:46.624077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.914 [2024-07-13 13:48:46.624099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.914 [2024-07-13 13:48:46.628352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.914 [2024-07-13 13:48:46.637646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.914 [2024-07-13 13:48:46.638160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.914 [2024-07-13 13:48:46.638201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.914 [2024-07-13 13:48:46.638228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.914 [2024-07-13 13:48:46.638521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.914 [2024-07-13 13:48:46.638817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.914 [2024-07-13 13:48:46.638848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.914 [2024-07-13 13:48:46.638880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:11.914 [2024-07-13 13:48:46.643132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.914 [2024-07-13 13:48:46.652428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:11.914 [2024-07-13 13:48:46.652921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.914 [2024-07-13 13:48:46.652970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:11.914 [2024-07-13 13:48:46.652997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:11.914 [2024-07-13 13:48:46.653321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:11.914 [2024-07-13 13:48:46.653637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:11.914 [2024-07-13 13:48:46.653670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:11.914 [2024-07-13 13:48:46.653693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.175 [2024-07-13 13:48:46.658284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.175 [2024-07-13 13:48:46.666981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.175 [2024-07-13 13:48:46.667525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.175 [2024-07-13 13:48:46.667568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.175 [2024-07-13 13:48:46.667595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.175 [2024-07-13 13:48:46.667897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.175 [2024-07-13 13:48:46.668191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.175 [2024-07-13 13:48:46.668221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.175 [2024-07-13 13:48:46.668244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.175 [2024-07-13 13:48:46.672442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.175 [2024-07-13 13:48:46.681580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.175 [2024-07-13 13:48:46.682069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.175 [2024-07-13 13:48:46.682110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.175 [2024-07-13 13:48:46.682136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.175 [2024-07-13 13:48:46.682424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.175 [2024-07-13 13:48:46.682716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.175 [2024-07-13 13:48:46.682747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.175 [2024-07-13 13:48:46.682769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.175 [2024-07-13 13:48:46.687127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.175 [2024-07-13 13:48:46.696251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.175 [2024-07-13 13:48:46.696731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.175 [2024-07-13 13:48:46.696773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.175 [2024-07-13 13:48:46.696799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.175 [2024-07-13 13:48:46.697103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.175 [2024-07-13 13:48:46.697392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.175 [2024-07-13 13:48:46.697423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.175 [2024-07-13 13:48:46.697445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.175 [2024-07-13 13:48:46.701619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.175 [2024-07-13 13:48:46.710732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.175 [2024-07-13 13:48:46.711248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.175 [2024-07-13 13:48:46.711290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.175 [2024-07-13 13:48:46.711315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.175 [2024-07-13 13:48:46.711605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.175 [2024-07-13 13:48:46.711908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.175 [2024-07-13 13:48:46.711940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.175 [2024-07-13 13:48:46.711962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.175 [2024-07-13 13:48:46.716143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.175 [2024-07-13 13:48:46.725281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.175 [2024-07-13 13:48:46.725789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.175 [2024-07-13 13:48:46.725830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.175 [2024-07-13 13:48:46.725856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.175 [2024-07-13 13:48:46.726166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.175 [2024-07-13 13:48:46.726459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.175 [2024-07-13 13:48:46.726491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.175 [2024-07-13 13:48:46.726512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.175 [2024-07-13 13:48:46.730748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.175 [2024-07-13 13:48:46.740131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.175 [2024-07-13 13:48:46.740854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.175 [2024-07-13 13:48:46.740926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.175 [2024-07-13 13:48:46.740959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.175 [2024-07-13 13:48:46.741266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.175 [2024-07-13 13:48:46.741567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.175 [2024-07-13 13:48:46.741600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.175 [2024-07-13 13:48:46.741636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.175 [2024-07-13 13:48:46.745950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.175 [2024-07-13 13:48:46.754852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.175 [2024-07-13 13:48:46.755523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.175 [2024-07-13 13:48:46.755572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.175 [2024-07-13 13:48:46.755603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.175 [2024-07-13 13:48:46.755914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.175 [2024-07-13 13:48:46.756213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.175 [2024-07-13 13:48:46.756246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.175 [2024-07-13 13:48:46.756272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.175 [2024-07-13 13:48:46.760511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.175 [2024-07-13 13:48:46.769542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.175 [2024-07-13 13:48:46.770086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.175 [2024-07-13 13:48:46.770128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.175 [2024-07-13 13:48:46.770154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.175 [2024-07-13 13:48:46.770445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.175 [2024-07-13 13:48:46.770739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.175 [2024-07-13 13:48:46.770770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.175 [2024-07-13 13:48:46.770792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.175 [2024-07-13 13:48:46.775053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.175 [2024-07-13 13:48:46.784300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.175 [2024-07-13 13:48:46.784786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.175 [2024-07-13 13:48:46.784828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.175 [2024-07-13 13:48:46.784853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.175 [2024-07-13 13:48:46.785153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.175 [2024-07-13 13:48:46.785449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.175 [2024-07-13 13:48:46.785480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.175 [2024-07-13 13:48:46.785502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.175 [2024-07-13 13:48:46.789740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.175 [2024-07-13 13:48:46.798995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.175 [2024-07-13 13:48:46.799516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.175 [2024-07-13 13:48:46.799558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.175 [2024-07-13 13:48:46.799583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.175 [2024-07-13 13:48:46.799882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.175 [2024-07-13 13:48:46.800174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.175 [2024-07-13 13:48:46.800206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.175 [2024-07-13 13:48:46.800228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.175 [2024-07-13 13:48:46.804446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.175 [2024-07-13 13:48:46.813655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.175 [2024-07-13 13:48:46.814159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.176 [2024-07-13 13:48:46.814200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.176 [2024-07-13 13:48:46.814227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.176 [2024-07-13 13:48:46.814517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.176 [2024-07-13 13:48:46.814811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.176 [2024-07-13 13:48:46.814842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.176 [2024-07-13 13:48:46.814863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.176 [2024-07-13 13:48:46.819081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.176 [2024-07-13 13:48:46.828182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.176 [2024-07-13 13:48:46.828640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.176 [2024-07-13 13:48:46.828681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.176 [2024-07-13 13:48:46.828706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.176 [2024-07-13 13:48:46.829004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.176 [2024-07-13 13:48:46.829295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.176 [2024-07-13 13:48:46.829327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.176 [2024-07-13 13:48:46.829348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.176 [2024-07-13 13:48:46.833519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.176 [2024-07-13 13:48:46.842922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.176 [2024-07-13 13:48:46.843450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.176 [2024-07-13 13:48:46.843492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.176 [2024-07-13 13:48:46.843518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.176 [2024-07-13 13:48:46.843817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.176 [2024-07-13 13:48:46.844120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.176 [2024-07-13 13:48:46.844152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.176 [2024-07-13 13:48:46.844174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.176 [2024-07-13 13:48:46.848405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.176 [2024-07-13 13:48:46.857649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.176 [2024-07-13 13:48:46.858165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.176 [2024-07-13 13:48:46.858208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.176 [2024-07-13 13:48:46.858235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.176 [2024-07-13 13:48:46.858531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.176 [2024-07-13 13:48:46.858828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.176 [2024-07-13 13:48:46.858859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.176 [2024-07-13 13:48:46.858895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.176 [2024-07-13 13:48:46.863154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.176 [2024-07-13 13:48:46.872457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.176 [2024-07-13 13:48:46.872965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.176 [2024-07-13 13:48:46.873006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.176 [2024-07-13 13:48:46.873032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.176 [2024-07-13 13:48:46.873325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.176 [2024-07-13 13:48:46.873619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.176 [2024-07-13 13:48:46.873650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.176 [2024-07-13 13:48:46.873672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.176 [2024-07-13 13:48:46.877968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.176 [2024-07-13 13:48:46.887265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.176 [2024-07-13 13:48:46.887776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.176 [2024-07-13 13:48:46.887816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.176 [2024-07-13 13:48:46.887842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.176 [2024-07-13 13:48:46.888143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.176 [2024-07-13 13:48:46.888438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.176 [2024-07-13 13:48:46.888469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.176 [2024-07-13 13:48:46.888497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.176 [2024-07-13 13:48:46.892718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.176 [2024-07-13 13:48:46.901948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.176 [2024-07-13 13:48:46.902465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.176 [2024-07-13 13:48:46.902506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.176 [2024-07-13 13:48:46.902532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.176 [2024-07-13 13:48:46.902821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.176 [2024-07-13 13:48:46.903123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.176 [2024-07-13 13:48:46.903155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.176 [2024-07-13 13:48:46.903177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.176 [2024-07-13 13:48:46.907382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.176 [2024-07-13 13:48:46.916815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.176 [2024-07-13 13:48:46.917328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.176 [2024-07-13 13:48:46.917373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.176 [2024-07-13 13:48:46.917400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.176 [2024-07-13 13:48:46.917688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.176 [2024-07-13 13:48:46.917992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.176 [2024-07-13 13:48:46.918039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.176 [2024-07-13 13:48:46.918081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.436 [2024-07-13 13:48:46.922444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.436 [2024-07-13 13:48:46.931515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.436 [2024-07-13 13:48:46.932030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.436 [2024-07-13 13:48:46.932074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.436 [2024-07-13 13:48:46.932101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.436 [2024-07-13 13:48:46.932389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.436 [2024-07-13 13:48:46.932681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.436 [2024-07-13 13:48:46.932712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.436 [2024-07-13 13:48:46.932735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.436 [2024-07-13 13:48:46.936904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.436 [2024-07-13 13:48:46.945994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.436 [2024-07-13 13:48:46.946514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.436 [2024-07-13 13:48:46.946555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.436 [2024-07-13 13:48:46.946582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.436 [2024-07-13 13:48:46.946879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.436 [2024-07-13 13:48:46.947169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.436 [2024-07-13 13:48:46.947200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.436 [2024-07-13 13:48:46.947222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.436 [2024-07-13 13:48:46.951402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.436 [2024-07-13 13:48:46.960500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.436 [2024-07-13 13:48:46.960983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.436 [2024-07-13 13:48:46.961025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.436 [2024-07-13 13:48:46.961052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.436 [2024-07-13 13:48:46.961340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.436 [2024-07-13 13:48:46.961630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.436 [2024-07-13 13:48:46.961661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.436 [2024-07-13 13:48:46.961684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.436 [2024-07-13 13:48:46.965845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.436 [2024-07-13 13:48:46.975177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.436 [2024-07-13 13:48:46.975647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.436 [2024-07-13 13:48:46.975689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.436 [2024-07-13 13:48:46.975731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.436 [2024-07-13 13:48:46.976032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.436 [2024-07-13 13:48:46.976322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.437 [2024-07-13 13:48:46.976353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.437 [2024-07-13 13:48:46.976375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.437 [2024-07-13 13:48:46.980537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.437 [2024-07-13 13:48:46.989857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.437 [2024-07-13 13:48:46.990342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.437 [2024-07-13 13:48:46.990383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.437 [2024-07-13 13:48:46.990409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.437 [2024-07-13 13:48:46.990700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.437 [2024-07-13 13:48:46.991001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.437 [2024-07-13 13:48:46.991033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.437 [2024-07-13 13:48:46.991055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.437 [2024-07-13 13:48:46.995221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.437 [2024-07-13 13:48:47.004571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.437 [2024-07-13 13:48:47.005058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.437 [2024-07-13 13:48:47.005099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.437 [2024-07-13 13:48:47.005125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.437 [2024-07-13 13:48:47.005413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.437 [2024-07-13 13:48:47.005703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.437 [2024-07-13 13:48:47.005734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.437 [2024-07-13 13:48:47.005755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.437 [2024-07-13 13:48:47.009933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.437 [2024-07-13 13:48:47.019252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.437 [2024-07-13 13:48:47.019775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.437 [2024-07-13 13:48:47.019816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.437 [2024-07-13 13:48:47.019843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.437 [2024-07-13 13:48:47.020138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.437 [2024-07-13 13:48:47.020428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.437 [2024-07-13 13:48:47.020459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.437 [2024-07-13 13:48:47.020481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.437 [2024-07-13 13:48:47.024642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.437 [2024-07-13 13:48:47.033749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.437 [2024-07-13 13:48:47.034261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.437 [2024-07-13 13:48:47.034302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.437 [2024-07-13 13:48:47.034328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.437 [2024-07-13 13:48:47.034614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.437 [2024-07-13 13:48:47.034916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.437 [2024-07-13 13:48:47.034947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.437 [2024-07-13 13:48:47.034975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.437 [2024-07-13 13:48:47.039149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.437 [2024-07-13 13:48:47.048002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.437 [2024-07-13 13:48:47.048478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.437 [2024-07-13 13:48:47.048515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.437 [2024-07-13 13:48:47.048539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.437 [2024-07-13 13:48:47.048800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.437 [2024-07-13 13:48:47.049072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.437 [2024-07-13 13:48:47.049101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.437 [2024-07-13 13:48:47.049121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.437 [2024-07-13 13:48:47.052941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.437 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:12.437 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:37:12.437 13:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:12.437 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:12.437 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.437 [2024-07-13 13:48:47.062247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.437 [2024-07-13 13:48:47.062746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.437 [2024-07-13 13:48:47.062783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.437 [2024-07-13 13:48:47.062807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.437 [2024-07-13 13:48:47.063085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.437 [2024-07-13 13:48:47.063368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.437 [2024-07-13 13:48:47.063395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.437 [2024-07-13 13:48:47.063415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.437 [2024-07-13 13:48:47.067221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.437 [2024-07-13 13:48:47.076544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.437 [2024-07-13 13:48:47.076985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.437 [2024-07-13 13:48:47.077023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.437 [2024-07-13 13:48:47.077047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.437 [2024-07-13 13:48:47.077322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.437 [2024-07-13 13:48:47.077586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.437 [2024-07-13 13:48:47.077613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.437 [2024-07-13 13:48:47.077638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.437 [2024-07-13 13:48:47.081456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.437 13:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:12.437 13:48:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:12.437 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.437 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.437 [2024-07-13 13:48:47.087793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:12.437 [2024-07-13 13:48:47.090710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.437 [2024-07-13 13:48:47.091159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.437 [2024-07-13 13:48:47.091196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.437 [2024-07-13 13:48:47.091219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.437 [2024-07-13 13:48:47.091492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.437 [2024-07-13 13:48:47.091747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.437 [2024-07-13 13:48:47.091774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.437 [2024-07-13 13:48:47.091794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.437 [2024-07-13 13:48:47.095588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
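Note on the repeated failures above: errno 111 is ECONNREFUSED on Linux, meaning nothing is listening on 10.0.0.2:4420 yet, so each host-side bdev_nvme reconnect attempt fails and the controller reset is retried. Interleaved with those retries, the bdevperf script starts bringing up the target: rpc_cmd nvmf_create_transport -t tcp -o -u 8192 creates the TCP transport, acknowledged by the "*** TCP Transport Init ***" notice. rpc_cmd is the autotest harness's wrapper around SPDK's JSON-RPC client, so a plausible standalone equivalent is sketched below; the flags are copied from the trace, while the RPC socket path is an assumed default and does not appear in this log.

  # Sketch only: create the TCP transport on an already running nvmf_tgt (socket path assumed)
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192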
00:37:12.437 [2024-07-13 13:48:47.104713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.437 [2024-07-13 13:48:47.105213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.437 [2024-07-13 13:48:47.105251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.437 [2024-07-13 13:48:47.105274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.437 [2024-07-13 13:48:47.105557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.437 [2024-07-13 13:48:47.105802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.437 [2024-07-13 13:48:47.105829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.437 [2024-07-13 13:48:47.105862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.437 [2024-07-13 13:48:47.109649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.437 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.437 13:48:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:12.437 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.437 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.437 [2024-07-13 13:48:47.119171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.437 [2024-07-13 13:48:47.119761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.437 [2024-07-13 13:48:47.119805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.438 [2024-07-13 13:48:47.119832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.438 [2024-07-13 13:48:47.120122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.438 [2024-07-13 13:48:47.120407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.438 [2024-07-13 13:48:47.120436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.438 [2024-07-13 13:48:47.120460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.438 [2024-07-13 13:48:47.124462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.438 [2024-07-13 13:48:47.133556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.438 [2024-07-13 13:48:47.134247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.438 [2024-07-13 13:48:47.134295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.438 [2024-07-13 13:48:47.134324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.438 [2024-07-13 13:48:47.134618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.438 [2024-07-13 13:48:47.134911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.438 [2024-07-13 13:48:47.134943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.438 [2024-07-13 13:48:47.134967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.438 [2024-07-13 13:48:47.138838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.438 [2024-07-13 13:48:47.148021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.438 [2024-07-13 13:48:47.148467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.438 [2024-07-13 13:48:47.148504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.438 [2024-07-13 13:48:47.148528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.438 [2024-07-13 13:48:47.148809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.438 [2024-07-13 13:48:47.149103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.438 [2024-07-13 13:48:47.149133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.438 [2024-07-13 13:48:47.149153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.438 [2024-07-13 13:48:47.153117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.438 [2024-07-13 13:48:47.162127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.438 [2024-07-13 13:48:47.162620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.438 [2024-07-13 13:48:47.162657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.438 [2024-07-13 13:48:47.162681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.438 [2024-07-13 13:48:47.162955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.438 [2024-07-13 13:48:47.163243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.438 [2024-07-13 13:48:47.163278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.438 [2024-07-13 13:48:47.163299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.438 [2024-07-13 13:48:47.167054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.438 [2024-07-13 13:48:47.176490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.438 [2024-07-13 13:48:47.176988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.438 [2024-07-13 13:48:47.177028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.438 [2024-07-13 13:48:47.177054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.438 [2024-07-13 13:48:47.177344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.438 [2024-07-13 13:48:47.177618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.438 [2024-07-13 13:48:47.177645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.438 [2024-07-13 13:48:47.177666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.697 [2024-07-13 13:48:47.181981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.697 Malloc0 00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.697 [2024-07-13 13:48:47.190926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.697 [2024-07-13 13:48:47.191398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.697 [2024-07-13 13:48:47.191437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.697 [2024-07-13 13:48:47.191462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.697 [2024-07-13 13:48:47.191738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.697 [2024-07-13 13:48:47.192030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.697 [2024-07-13 13:48:47.192060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.697 [2024-07-13 13:48:47.192080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.697 [2024-07-13 13:48:47.195987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.697 [2024-07-13 13:48:47.205138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.697 [2024-07-13 13:48:47.205603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.697 [2024-07-13 13:48:47.205641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.697 [2024-07-13 13:48:47.205664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:37:12.697 [2024-07-13 13:48:47.205965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.697 [2024-07-13 13:48:47.206243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:12.697 [2024-07-13 13:48:47.206271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:12.697 [2024-07-13 13:48:47.206290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:12.697 [2024-07-13 13:48:47.207562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:12.697 [2024-07-13 13:48:47.210119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.697 13:48:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 455393 00:37:12.697 [2024-07-13 13:48:47.219290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:12.697 [2024-07-13 13:48:47.388778] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:37:22.702 00:37:22.702 Latency(us) 00:37:22.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:22.702 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:22.702 Verification LBA range: start 0x0 length 0x4000 00:37:22.702 Nvme1n1 : 15.01 4346.28 16.98 9164.81 0.00 9444.39 1171.15 37282.70 00:37:22.702 =================================================================================================================== 00:37:22.702 Total : 4346.28 16.98 9164.81 0.00 9444.39 1171.15 37282.70 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:22.702 rmmod nvme_tcp 00:37:22.702 rmmod nvme_fabrics 00:37:22.702 rmmod nvme_keyring 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 456132 ']' 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 456132 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 456132 ']' 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 456132 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 456132 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 456132' 00:37:22.702 killing process with pid 456132 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 456132 00:37:22.702 13:48:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 456132 00:37:23.637 13:48:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:23.637 13:48:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:23.637 
13:48:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:23.637 13:48:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:23.637 13:48:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:23.637 13:48:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.637 13:48:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:23.637 13:48:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.171 13:49:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:26.171 00:37:26.171 real 0m26.486s 00:37:26.171 user 1m12.981s 00:37:26.171 sys 0m4.396s 00:37:26.171 13:49:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:26.171 13:49:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:26.171 ************************************ 00:37:26.171 END TEST nvmf_bdevperf 00:37:26.171 ************************************ 00:37:26.171 13:49:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:26.171 13:49:00 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:26.171 13:49:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:26.171 13:49:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:26.171 13:49:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:26.171 ************************************ 00:37:26.171 START TEST nvmf_target_disconnect 00:37:26.171 ************************************ 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:26.171 * Looking for test storage... 
00:37:26.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:26.171 13:49:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:26.172 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:26.172 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:26.172 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:37:26.172 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:26.172 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:26.172 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.172 13:49:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:26.172 13:49:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.172 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:26.172 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:26.172 13:49:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:37:26.172 13:49:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:28.072 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:28.072 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:28.072 13:49:02 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:28.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:28.072 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:28.072 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:28.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:28.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:37:28.073 00:37:28.073 --- 10.0.0.2 ping statistics --- 00:37:28.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:28.073 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:28.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:28.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:37:28.073 00:37:28.073 --- 10.0.0.1 ping statistics --- 00:37:28.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:28.073 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:28.073 ************************************ 00:37:28.073 START TEST nvmf_target_disconnect_tc1 00:37:28.073 ************************************ 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:37:28.073 
13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:28.073 EAL: No free 2048 kB hugepages reported on node 1 00:37:28.073 [2024-07-13 13:49:02.680185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:28.073 [2024-07-13 13:49:02.680296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2000 with addr=10.0.0.2, port=4420 00:37:28.073 [2024-07-13 13:49:02.680390] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:28.073 [2024-07-13 13:49:02.680420] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:28.073 [2024-07-13 13:49:02.680445] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:37:28.073 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:28.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:28.073 Initializing NVMe Controllers 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:28.073 00:37:28.073 real 0m0.220s 00:37:28.073 user 0m0.092s 00:37:28.073 sys 
0m0.126s 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:28.073 ************************************ 00:37:28.073 END TEST nvmf_target_disconnect_tc1 00:37:28.073 ************************************ 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:28.073 ************************************ 00:37:28.073 START TEST nvmf_target_disconnect_tc2 00:37:28.073 ************************************ 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=459540 00:37:28.073 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:28.074 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 459540 00:37:28.074 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 459540 ']' 00:37:28.074 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:28.074 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:28.074 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:28.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:28.074 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:28.074 13:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:28.339 [2024-07-13 13:49:02.860106] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:28.339 [2024-07-13 13:49:02.860282] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:28.339 EAL: No free 2048 kB hugepages reported on node 1 00:37:28.339 [2024-07-13 13:49:02.996840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:28.601 [2024-07-13 13:49:03.225087] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:28.601 [2024-07-13 13:49:03.225158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:28.601 [2024-07-13 13:49:03.225197] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:28.601 [2024-07-13 13:49:03.225217] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:28.601 [2024-07-13 13:49:03.225238] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:28.601 [2024-07-13 13:49:03.225520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:37:28.601 [2024-07-13 13:49:03.225565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:37:28.601 [2024-07-13 13:49:03.225691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:37:28.601 [2024-07-13 13:49:03.225733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:37:29.165 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:29.165 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:37:29.165 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:29.165 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:29.165 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.165 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:29.165 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:29.165 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.165 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.166 Malloc0 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:29.166 13:49:03 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.166 [2024-07-13 13:49:03.863104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.166 [2024-07-13 13:49:03.892480] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=459694 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:29.166 13:49:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:29.423 EAL: No free 2048 kB 
hugepages reported on node 1 00:37:31.334 13:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 459540 00:37:31.334 13:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Write completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Write completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Write completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Write completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Write completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Write completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Write completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Write completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Write completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 [2024-07-13 13:49:05.929313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting 
I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.334 Read completed with error (sct=0, sc=8) 00:37:31.334 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 [2024-07-13 13:49:05.929960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O 
failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Write completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 [2024-07-13 13:49:05.930611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.335 starting I/O failed 00:37:31.335 Read completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Read completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Read completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Read completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Write completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Read completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Write completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Read completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 
00:37:31.336 Read completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Read completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Read completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Write completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Read completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Write completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Write completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Write completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Write completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Write completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Read completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Read completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Read completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 Write completed with error (sct=0, sc=8) 00:37:31.336 starting I/O failed 00:37:31.336 [2024-07-13 13:49:05.931285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:31.336 [2024-07-13 13:49:05.931536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.931593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.931807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.931841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.932278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.932324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.932513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.932545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.932718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.932769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.933291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.933346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 
00:37:31.336 [2024-07-13 13:49:05.933616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.933651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.933895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.933944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.934138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.934171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.934351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.934383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.934589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.934621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.934815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.934851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.935037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.935070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.935226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.935257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.935530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.935562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 00:37:31.336 [2024-07-13 13:49:05.935757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.336 [2024-07-13 13:49:05.935793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.336 qpair failed and we were unable to recover it. 
00:37:31.336 [2024-07-13 13:49:05.935982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.936014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.936155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.936187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.936345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.936394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.936616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.936648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.936817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.936848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.937013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.937049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.937233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.937283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.937442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.937477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.937667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.937702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.937888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.937921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 
00:37:31.337 [2024-07-13 13:49:05.938075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.938108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.938256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.938288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.938456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.938487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.938640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.938671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.938877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.938910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.939094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.939126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.939269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.939300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.939469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.939501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.939675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.939708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.939883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.939916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 
00:37:31.337 [2024-07-13 13:49:05.940062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.940094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.940267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.940314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.940505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.940538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.940689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.940720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.940898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.337 [2024-07-13 13:49:05.940930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.337 qpair failed and we were unable to recover it. 00:37:31.337 [2024-07-13 13:49:05.941111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.941143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.941308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.941341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.941519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.941550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.941783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.941814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.942015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.942051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 
00:37:31.338 [2024-07-13 13:49:05.942288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.942320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.942514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.942546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.942724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.942756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.942929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.942961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.943115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.943147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.943315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.943346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.943493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.943525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.943682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.943714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.943896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.943929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.944078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.944111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 
00:37:31.338 [2024-07-13 13:49:05.944337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.944385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.944557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.944592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.944813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.944844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.945038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.945069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.945221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.945252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.945428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.945464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.945620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.945651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.945795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.945827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.945973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.946005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.338 [2024-07-13 13:49:05.946161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.946192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 
00:37:31.338 [2024-07-13 13:49:05.946391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.338 [2024-07-13 13:49:05.946423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.338 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.946592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.946623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.946762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.946793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.946943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.946975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.947152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.947184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.947356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.947388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.947532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.947564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.947739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.947770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.947961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.947993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.948150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.948181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 
00:37:31.339 [2024-07-13 13:49:05.948355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.948387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.948561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.948592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.948795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.948826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.949032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.949064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.949242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.949274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.949471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.949502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.949688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.949722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.949878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.949911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.950104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.950136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.950306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.950338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 
00:37:31.339 [2024-07-13 13:49:05.950540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.950571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.950717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.950749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.950947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.950980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.951116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.951147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.951336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.951371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.951567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.339 [2024-07-13 13:49:05.951599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.339 qpair failed and we were unable to recover it. 00:37:31.339 [2024-07-13 13:49:05.951734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.951765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.951965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.951998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.952151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.952182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.952404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.952440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 
00:37:31.340 [2024-07-13 13:49:05.952678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.952710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.952906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.952938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.953118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.953150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.953322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.953354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.953529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.953561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.953736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.953772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.953963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.953997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.954150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.954181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.954358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.954390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.954596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.954628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 
00:37:31.340 [2024-07-13 13:49:05.954801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.954838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.955078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.955110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.955259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.955291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.955466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.955498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.955642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.955675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.955846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.955885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.956030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.956063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.956234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.956266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.956452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.956485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.956667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.956698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 
00:37:31.340 [2024-07-13 13:49:05.956873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.956906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.340 [2024-07-13 13:49:05.957060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.340 [2024-07-13 13:49:05.957092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.340 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.957302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.957334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.957483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.957515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.957683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.957745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.957918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.957951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.958136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.958168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.958311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.958361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.958517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.958548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.958720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.958751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 
00:37:31.341 [2024-07-13 13:49:05.958945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.958977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.959154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.959186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.959366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.959397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.959571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.959603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.959787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.959820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.959978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.960010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.960187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.960218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.960417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.960449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.960604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.960636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 00:37:31.341 [2024-07-13 13:49:05.960834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.341 [2024-07-13 13:49:05.960872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.341 qpair failed and we were unable to recover it. 
00:37:31.341 [2024-07-13 13:49:05.961048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.961081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.961261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.961293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.961472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.961505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.961651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.961683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.961831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.961863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.962014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.962050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.962204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.962236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.962432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.962464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.962611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.962643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.962818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.962849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 
00:37:31.342 [2024-07-13 13:49:05.963011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.963043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.963241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.963273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.963457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.963488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.963632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.963664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.963813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.963845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.964008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.964040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.964225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.964261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.964476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.964508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.964680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.964712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.964917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.964949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 
00:37:31.342 [2024-07-13 13:49:05.965099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.965131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.965276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.965308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.965480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.965531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.965741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.965773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.965920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.965952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.966115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.966147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.966362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.342 [2024-07-13 13:49:05.966394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.342 qpair failed and we were unable to recover it. 00:37:31.342 [2024-07-13 13:49:05.966567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.966598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.966772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.966803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.966980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.967012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 
00:37:31.343 [2024-07-13 13:49:05.967167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.967199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.967377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.967409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.967579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.967611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.967780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.967811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.968029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.968061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.968236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.968267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.968434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.968465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.968631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.968663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.968814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.968846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.969051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.969083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 
00:37:31.343 [2024-07-13 13:49:05.969232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.969280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.969503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.969534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.969715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.969746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.969897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.969930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.970100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.970131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.970339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.970375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.970585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.970617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.970758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.970789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.970933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.970976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.971175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.971207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 
00:37:31.343 [2024-07-13 13:49:05.971381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.971412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.971591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.971623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.971794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.971825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.971997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.972029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.972209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.972241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.343 qpair failed and we were unable to recover it. 00:37:31.343 [2024-07-13 13:49:05.972440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.343 [2024-07-13 13:49:05.972471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.972618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.972667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.972839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.972875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.973079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.973111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.973290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.973321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 
00:37:31.344 [2024-07-13 13:49:05.973459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.973490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.973662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.973694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.973878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.973910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.974082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.974113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.974288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.974320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.974485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.974516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.974721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.974753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.974897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.974930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.975130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.975161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.975327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.975358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 
00:37:31.344 [2024-07-13 13:49:05.975528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.975560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.975742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.975774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.975922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.975955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.976166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.976201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.976381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.976413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.976579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.976611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.976784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.976815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.977011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.977044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.977191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.977223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.977399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.977431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 
00:37:31.344 [2024-07-13 13:49:05.977608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.977639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.977783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.977814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.977976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.978009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.978176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.978208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.978344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.978376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.978557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.978593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.978776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.978808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.978987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.979020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.979170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.979202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 00:37:31.344 [2024-07-13 13:49:05.979396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.344 [2024-07-13 13:49:05.979432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.344 qpair failed and we were unable to recover it. 
00:37:31.344 [2024-07-13 13:49:05.979660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.979692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.979833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.979870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.980069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.980117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.980288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.980320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.980512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.980547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.980739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.980771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.980942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.980975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.981122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.981153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.981303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.981352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.981551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.981582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 
00:37:31.345 [2024-07-13 13:49:05.981760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.981792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.981937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.981969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.982146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.982179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.982327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.982359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.982532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.982564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.982730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.982761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.982945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.982978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.983184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.983220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.983417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.983449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.983600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.983633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 
00:37:31.345 [2024-07-13 13:49:05.983788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.983820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.983972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.984004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.984164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.984206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.984358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.984390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.984568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.984600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.984759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.984795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.985014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.985047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.985224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.985256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.985435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.985468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.985614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.985646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 
00:37:31.345 [2024-07-13 13:49:05.985820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.985851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.986042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.986074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.986276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.986308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.986447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.986479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.986616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.986665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.986833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.986879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.987104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.345 [2024-07-13 13:49:05.987136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.345 qpair failed and we were unable to recover it. 00:37:31.345 [2024-07-13 13:49:05.987319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.987351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.987551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.987582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.987773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.987808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 
00:37:31.346 [2024-07-13 13:49:05.988066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.988099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.988341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.988374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.988578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.988609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.988790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.988822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.989019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.989051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.989199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.989231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.989428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.989460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.989633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.989664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.989835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.989873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.990049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.990080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 
00:37:31.346 [2024-07-13 13:49:05.990252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.990284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.990430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.990462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.990667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.990699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.990878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.990910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.991084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.991116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.991287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.991319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.991521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.991553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.991717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.991750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.991928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.991961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.992163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.992211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 
00:37:31.346 [2024-07-13 13:49:05.992388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.992419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.992579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.992611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.992790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.992840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.993039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.993070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.993250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.993281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.993519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.993554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.993753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.993785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.994010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.994046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.994217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.994253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.994462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.994494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 
00:37:31.346 [2024-07-13 13:49:05.994671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.346 [2024-07-13 13:49:05.994703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.346 qpair failed and we were unable to recover it. 00:37:31.346 [2024-07-13 13:49:05.994838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.347 [2024-07-13 13:49:05.994875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.347 qpair failed and we were unable to recover it. 00:37:31.347 [2024-07-13 13:49:05.995053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.347 [2024-07-13 13:49:05.995085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.347 qpair failed and we were unable to recover it. 00:37:31.347 [2024-07-13 13:49:05.995259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.347 [2024-07-13 13:49:05.995291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.347 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.995464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.995496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.995679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.995715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.995899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.995932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.996124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.996160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.996386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.996418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.996590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.996622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 
00:37:31.348 [2024-07-13 13:49:05.996782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.996815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.996995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.997028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.997181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.997213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.997389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.997421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.997570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.997602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.997801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.997833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.998025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.998088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.998264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.998297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.998452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.998484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.998638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.998671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 
00:37:31.348 [2024-07-13 13:49:05.998859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.998897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.999069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.999119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.999289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.999325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.999559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.999591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:05.999825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:05.999860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:06.000065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:06.000097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:06.000247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:06.000279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:06.000451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:06.000484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:06.000674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:06.000709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 00:37:31.348 [2024-07-13 13:49:06.000903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.348 [2024-07-13 13:49:06.000936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.348 qpair failed and we were unable to recover it. 
00:37:31.348 [2024-07-13 13:49:06.001115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:31.348 [2024-07-13 13:49:06.001147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:31.348 qpair failed and we were unable to recover it.
00:37:31.349 [2024-07-13 13:49:06.012106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:31.349 [2024-07-13 13:49:06.012157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:31.349 qpair failed and we were unable to recover it.
00:37:31.350 [2024-07-13 13:49:06.018303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:31.350 [2024-07-13 13:49:06.018351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:31.350 qpair failed and we were unable to recover it.
[repeated log output condensed: the same three-line error recurs for every reconnect attempt between 13:49:06.001115 and 13:49:06.046838, differing only in timestamps; the failing tqpairs are 0x6150001f2780, 0x61500021ff00, and 0x6150001ffe80, all with addr=10.0.0.2, port=4420 and errno = 111, each ending "qpair failed and we were unable to recover it."]
00:37:31.354 [2024-07-13 13:49:06.047017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.047049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.047219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.047251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.047470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.047502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.047650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.047682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.047884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.047917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.048115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.048146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.048334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.048366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.048534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.048571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.048832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.048864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.049050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.049082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 
00:37:31.354 [2024-07-13 13:49:06.049297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.049330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.049473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.049515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.049691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.049725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.049915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.049952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.050171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.050204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.050352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.050384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.050561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.050593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.050787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.050819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.051037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.051070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.051262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.051297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 
00:37:31.354 [2024-07-13 13:49:06.051493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.051525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.051669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.051703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.051970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.052007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.052233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.052265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.052490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.052525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.052769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.052804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.052992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.053026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.053179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.053212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.053382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.053414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.053584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.053616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 
00:37:31.354 [2024-07-13 13:49:06.053762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.053795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.054021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.054054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.054228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.054264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.054410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.054444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.054668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.054705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.054888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.054921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.055090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.055122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.055331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.354 [2024-07-13 13:49:06.055381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.354 qpair failed and we were unable to recover it. 00:37:31.354 [2024-07-13 13:49:06.055610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.055642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.055813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.055845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 
00:37:31.355 [2024-07-13 13:49:06.056048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.056080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.056223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.056256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.057416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.057459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.057684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.057721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.058505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.058547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.058777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.058810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.059005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.059039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.059200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.059233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.059417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.059449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.059603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.059637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 
00:37:31.355 [2024-07-13 13:49:06.059891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.059942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.060096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.060129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.060347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.060384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.060579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.060611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.060813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.060846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.061039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.061072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.061268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.061301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.061466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.061498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.061719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.061756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.061966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.061999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 
00:37:31.355 [2024-07-13 13:49:06.062193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.062230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.062403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.062436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.062632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.062665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.062919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.062953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.063102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.063150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.063459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.063506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.063687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.063750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.063968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.064018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.064212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.064247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.064429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.064463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 
00:37:31.355 [2024-07-13 13:49:06.064661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.064714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.064895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.064929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.065124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.065180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.066318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.066360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.066722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.066788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.067004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.067039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.067240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.067291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.067500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.355 [2024-07-13 13:49:06.067551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.355 qpair failed and we were unable to recover it. 00:37:31.355 [2024-07-13 13:49:06.067759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.356 [2024-07-13 13:49:06.067794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.356 qpair failed and we were unable to recover it. 00:37:31.356 [2024-07-13 13:49:06.067982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.356 [2024-07-13 13:49:06.068032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.356 qpair failed and we were unable to recover it. 
00:37:31.356 [2024-07-13 13:49:06.068228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.356 [2024-07-13 13:49:06.068295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.356 qpair failed and we were unable to recover it. 00:37:31.356 [2024-07-13 13:49:06.068497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.356 [2024-07-13 13:49:06.068539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.356 qpair failed and we were unable to recover it. 00:37:31.356 [2024-07-13 13:49:06.068739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.356 [2024-07-13 13:49:06.068776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.356 qpair failed and we were unable to recover it. 00:37:31.356 [2024-07-13 13:49:06.069020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.356 [2024-07-13 13:49:06.069058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.356 qpair failed and we were unable to recover it. 00:37:31.356 [2024-07-13 13:49:06.069245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.356 [2024-07-13 13:49:06.069297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.356 qpair failed and we were unable to recover it. 00:37:31.356 [2024-07-13 13:49:06.069512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.356 [2024-07-13 13:49:06.069552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.356 qpair failed and we were unable to recover it. 00:37:31.356 [2024-07-13 13:49:06.069783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.356 [2024-07-13 13:49:06.069819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.356 qpair failed and we were unable to recover it. 00:37:31.356 [2024-07-13 13:49:06.070035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.356 [2024-07-13 13:49:06.070069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.631 qpair failed and we were unable to recover it. 00:37:31.631 [2024-07-13 13:49:06.070272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.631 [2024-07-13 13:49:06.070320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.631 qpair failed and we were unable to recover it. 00:37:31.631 [2024-07-13 13:49:06.070514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.631 [2024-07-13 13:49:06.070550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.631 qpair failed and we were unable to recover it. 
00:37:31.631 [2024-07-13 13:49:06.070752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.631 [2024-07-13 13:49:06.070802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.631 qpair failed and we were unable to recover it. 00:37:31.631 [2024-07-13 13:49:06.071062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.071107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.071338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.071382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.071559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.071603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.071839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.071911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.072099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.072132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.072410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.072505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.072831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.072907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.073715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.073757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.073956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.073991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 
00:37:31.632 [2024-07-13 13:49:06.074173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.074206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.074401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.074438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.074686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.074723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.074966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.075000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.075170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.075206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.075459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.075495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.075688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.075725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.075908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.075943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.076142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.076191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.076399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.076436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 
00:37:31.632 [2024-07-13 13:49:06.076616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.076669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.076845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.076891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.077042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.077080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.077253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.077303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.077544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.077600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.077806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.077838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.078015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.078063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.078273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.078326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.078544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.078597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.078856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.078917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 
00:37:31.632 [2024-07-13 13:49:06.079073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.079106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.079368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.079405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.079590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.079626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.080474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.080516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.080756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.080793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.080992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.081027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.081198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.081234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.632 [2024-07-13 13:49:06.081440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.632 [2024-07-13 13:49:06.081484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.632 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.081642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.081679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.081847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.081910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 
00:37:31.633 [2024-07-13 13:49:06.082095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.082128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.082303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.082335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.082509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.082542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.082707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.082743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.082972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.083005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.083202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.083238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.083588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.083650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.083843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.083904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.084089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.084122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.084403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.084460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 
00:37:31.633 [2024-07-13 13:49:06.084729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.084787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.084988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.085022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.085223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.085256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.085478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.085514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.085709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.085745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.085934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.085967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.086144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.086200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.087119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.087174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.087388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.087428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.087695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.087754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 
00:37:31.633 [2024-07-13 13:49:06.087952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.087985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.088138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.088179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.088391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.088434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.088607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.088641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.088834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.088886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.089055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.089087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.089305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.089341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.089530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.089566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.089723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.089758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.089978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.090010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 
00:37:31.633 [2024-07-13 13:49:06.090203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.090263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.090491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.090560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.090819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.090891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.091062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.091094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.633 qpair failed and we were unable to recover it. 00:37:31.633 [2024-07-13 13:49:06.091276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.633 [2024-07-13 13:49:06.091308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.091542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.091575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.091759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.091796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.092014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.092051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.092266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.092300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.092481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.092538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 
00:37:31.634 [2024-07-13 13:49:06.092915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.092970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.093138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.093195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.093429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.093464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.093607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.093642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.093799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.093833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.094015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.094062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.094262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.094327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.094612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.094647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.094822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.094859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.095030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.095064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 
00:37:31.634 [2024-07-13 13:49:06.095241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.095289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.095541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.095586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.095781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.095814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.095986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.096020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.096176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.096208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.096384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.096416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.096584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.096616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.096770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.096802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.097001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.097034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.097209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.097262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 
00:37:31.634 [2024-07-13 13:49:06.097552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.097623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.097843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.097883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.098048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.098085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.098297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.098333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.098619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.098676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.098881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.098913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.099097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.099131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.099342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.099395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.634 qpair failed and we were unable to recover it. 00:37:31.634 [2024-07-13 13:49:06.099652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.634 [2024-07-13 13:49:06.099705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.099925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.099957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 
00:37:31.635 [2024-07-13 13:49:06.100119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.100152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.100310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.100360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.100549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.100605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.100802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.100838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.101030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.101062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.101278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.101330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.101587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.101642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.101812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.101848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.102046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.102078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.102311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.102370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 
00:37:31.635 [2024-07-13 13:49:06.102672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.102732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.102962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.102994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.103177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.103225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.103426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.103459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.103664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.103700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.103931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.103963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.104166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.104207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.104379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.104415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.104632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.104668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.104864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.104922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 
00:37:31.635 [2024-07-13 13:49:06.105083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.105115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.105365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.105439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.105716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.105775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.105965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.105998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.106170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.106201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.106363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.106401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.106681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.106737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.106926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.106958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.107111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.107144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.107305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.107355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 
00:37:31.635 [2024-07-13 13:49:06.107576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.107618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.107816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.107852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.108034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.108071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.108266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.108319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.108638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.635 [2024-07-13 13:49:06.108694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.635 qpair failed and we were unable to recover it. 00:37:31.635 [2024-07-13 13:49:06.108892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.108942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.109100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.109132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.109352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.109388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.109651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.109705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.109897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.109947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 
00:37:31.636 [2024-07-13 13:49:06.110099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.110131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.110338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.110373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.110655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.110727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.110948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.110982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.111132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.111182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.111446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.111481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.111695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.111741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.111924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.111957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.112111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.112162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.112377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.112423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 
00:37:31.636 [2024-07-13 13:49:06.112637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.112672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.112880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.112913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.113063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.113095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.113309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.113362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.113596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.113655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.113876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.113927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.114078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.114110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.114403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.114477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.114730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.114784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.114982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.115015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 
00:37:31.636 [2024-07-13 13:49:06.115206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.115244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.636 [2024-07-13 13:49:06.115517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.636 [2024-07-13 13:49:06.115574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.636 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.115790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.115834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.116024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.116057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.116241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.116277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.116443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.116478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.116680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.116711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.116878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.116928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.117110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.117142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.117416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.117481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 
00:37:31.637 [2024-07-13 13:49:06.117706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.117738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.117951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.117983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.118162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.118223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.118447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.118504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.118678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.118720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.118885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.118918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.119095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.119127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.119349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.119401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.119629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.119661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.119907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.119958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 
00:37:31.637 [2024-07-13 13:49:06.120113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.120145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.120336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.120375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.120590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.120621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.120803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.120835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.121004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.121036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.121231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.121272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.121476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.121508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.121678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.121713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.121936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.121969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.122143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.122181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 
00:37:31.637 [2024-07-13 13:49:06.122393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.122424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.122614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.122650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.122880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.122913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.123090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.123122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.123349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.123381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.123527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.123576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.123746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.123782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.123945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:37:31.637 [2024-07-13 13:49:06.124167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.124214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.637 [2024-07-13 13:49:06.124404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.124438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 
00:37:31.637 [2024-07-13 13:49:06.124654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.637 [2024-07-13 13:49:06.124707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.637 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.124927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.124961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.125128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.125166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.125372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.125410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.125687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.125723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.125929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.125963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.126142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.126192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.126411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.126447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.126676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.126713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.126893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.126944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 
00:37:31.638 [2024-07-13 13:49:06.127119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.127184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.127494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.127533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.127701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.127737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.127961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.127995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.128144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.128194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.128392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.128427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.128612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.128651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.128839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.128918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.129076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.129109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.129397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.129445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 
00:37:31.638 [2024-07-13 13:49:06.129673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.129709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.129890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.129923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.130068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.130101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.130304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.130340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.130526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.130561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.130788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.130824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.131039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.131076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.131249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.131281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.131512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.131564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.131789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.131824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 
00:37:31.638 [2024-07-13 13:49:06.131991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.132024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.132156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.132188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.132392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.132427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.132623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.132674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.132898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.132931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.133081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.133114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.133354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.133390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.133673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.133726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.638 qpair failed and we were unable to recover it. 00:37:31.638 [2024-07-13 13:49:06.133944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.638 [2024-07-13 13:49:06.133976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.134120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.134152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 
00:37:31.639 [2024-07-13 13:49:06.134371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.134424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.134718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.134771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.134972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.135005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.135152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.135193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.135364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.135397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.135657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.135710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.135935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.135968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.136135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.136170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.136410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.136463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.136649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.136684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 
00:37:31.639 [2024-07-13 13:49:06.136875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.136926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.137076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.137108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.137356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.137388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.137583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.137619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.137783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.137820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.138020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.138052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.138221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.138253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.138492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.138546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.138739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.138775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.138987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.139020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 
00:37:31.639 [2024-07-13 13:49:06.139171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.139203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.139405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.139440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.139630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.139665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.139875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.139908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.140055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.140087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.140281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.140313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.140553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.140612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.140795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.140830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.141039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.141087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.141284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.141335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 
00:37:31.639 [2024-07-13 13:49:06.141586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.141650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.141849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.141894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.142074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.639 [2024-07-13 13:49:06.142106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.639 qpair failed and we were unable to recover it. 00:37:31.639 [2024-07-13 13:49:06.142354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.142395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.142651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.142708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.142938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.142970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.143132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.143179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.143399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.143467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.143763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.143818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.144015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.144048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 
00:37:31.640 [2024-07-13 13:49:06.144218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.144256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.144457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.144509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.144719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.144773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.144945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.144978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.145132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.145164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.145354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.145407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.145632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.145684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.145880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.145930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.146092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.146125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.146303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.146338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 
00:37:31.640 [2024-07-13 13:49:06.146578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.146632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.146822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.146859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.147084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.147132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.147402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.147457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.147771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.147829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.148005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.148037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.148213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.148263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.148469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.148521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.148694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.148727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.148922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.148976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 
00:37:31.640 [2024-07-13 13:49:06.149164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.149196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.149403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.149453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.149626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.149657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.149859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.149898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.150069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.150120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.150378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.150430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.150640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.150695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.640 [2024-07-13 13:49:06.150880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.640 [2024-07-13 13:49:06.150913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.640 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.151109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.151166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.151364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.151415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 
00:37:31.641 [2024-07-13 13:49:06.151698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.151751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.151920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.151955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.152177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.152228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.152424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.152489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.152673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.152729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.152947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.152981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.153168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.153200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.153409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.153444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.153673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.153726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.153913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.153946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 
00:37:31.641 [2024-07-13 13:49:06.154103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.154156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.154348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.154384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.154548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.154584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.154781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.154817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.155011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.155043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.155227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.155259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.155461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.155510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.155749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.155785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.155993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.156026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.156215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.156251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 
00:37:31.641 [2024-07-13 13:49:06.156449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.156484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.156718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.156755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.156966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.156999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.157207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.641 [2024-07-13 13:49:06.157242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.641 qpair failed and we were unable to recover it. 00:37:31.641 [2024-07-13 13:49:06.157432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.157468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.157701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.157737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.157970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.158002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.158174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.158207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.158402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.158437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.158620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.158656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 
00:37:31.642 [2024-07-13 13:49:06.158856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.158895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.159046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.159078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.159284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.159319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.159498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.159552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.159712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.159748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.159929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.159962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.160168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.160220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.160457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.160520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.160720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.160770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.160950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.160984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 
00:37:31.642 [2024-07-13 13:49:06.161188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.161221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.161428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.161461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.161606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.161638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.161786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.161818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.162007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.162039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.162188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.162221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.162367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.162399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.162611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.162644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.162824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.162859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.163057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.163106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 
00:37:31.642 [2024-07-13 13:49:06.163328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.163379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.163645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.163696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.163930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.163994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.164250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.164301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.164561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.164617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.165371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.165409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.165655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.165707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.165913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.165966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.166171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.642 [2024-07-13 13:49:06.166220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.642 qpair failed and we were unable to recover it. 00:37:31.642 [2024-07-13 13:49:06.166428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.166478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 
00:37:31.643 [2024-07-13 13:49:06.166685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.166717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.166861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.166900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.167088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.167138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.167381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.167435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.167664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.167703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.167917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.167952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.168169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.168213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.168398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.168430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.168592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.168625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.168820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.168857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 
00:37:31.643 [2024-07-13 13:49:06.169060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.169092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.169598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.169639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.169876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.169910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.170091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.170123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.170304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.170337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.170568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.170620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.170805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.170842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.171054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.171098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.171251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.171283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.171440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.171474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 
00:37:31.643 [2024-07-13 13:49:06.171694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.171726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.171897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.171930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.172109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.172141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.172320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.172352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.172647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.172684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.172863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.172921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.173095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.173127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.173423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.173455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.173681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.173718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.173894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.173927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 
00:37:31.643 [2024-07-13 13:49:06.174133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.174167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.174395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.174435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.174637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.174674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.174874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.174906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.175045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.175087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.175297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.175332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.643 [2024-07-13 13:49:06.175586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.643 [2024-07-13 13:49:06.175622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.643 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.175860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.175898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.176078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.176110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.176266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.176298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 
00:37:31.644 [2024-07-13 13:49:06.176470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.176506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.176666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.176701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.176939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.176972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.177169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.177204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.177430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.177465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.177656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.177691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.177862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.177909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.178099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.178132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.178380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.178412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.178623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.178655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 
00:37:31.644 [2024-07-13 13:49:06.178795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.178826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.178981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.179012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.179204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.179251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.179409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.179444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.179628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.179663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.179900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.179949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.180109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.180146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.180364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.180399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.180642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.180678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.180885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.180919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 
00:37:31.644 [2024-07-13 13:49:06.181094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.181126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.181305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.181337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.181488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.181537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.181751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.181787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.181964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.181997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.182167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.182198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.182343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.182375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.182540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.182571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.182743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.182775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.182951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.182984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 
00:37:31.644 [2024-07-13 13:49:06.183142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.644 [2024-07-13 13:49:06.183179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.644 qpair failed and we were unable to recover it. 00:37:31.644 [2024-07-13 13:49:06.183337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.183386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.183584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.183620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.183815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.183851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.184033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.184065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.184246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.184278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.184429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.184461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.184640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.184671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.184880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.184913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.185082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.185113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 
00:37:31.645 [2024-07-13 13:49:06.185293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.185325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.185513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.185545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.185698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.185737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.185923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.185955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.186096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.186129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.186286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.186318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.186495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.186527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.186709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.186741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.186927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.186960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.187138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.187171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 
00:37:31.645 [2024-07-13 13:49:06.187343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.187375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.187582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.187613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.187775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.187807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.188000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.188034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.188235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.188270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.188480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.188516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.188728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.188764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.188947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.188979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.189158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.189200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.189452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.189483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 
00:37:31.645 [2024-07-13 13:49:06.189654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.189686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.189861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.189899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.190050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.190082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.190242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.190274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.645 qpair failed and we were unable to recover it. 00:37:31.645 [2024-07-13 13:49:06.190451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.645 [2024-07-13 13:49:06.190483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.190691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.190723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.190889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.190922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.191064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.191096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.191297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.191329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.191542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.191574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 
00:37:31.646 [2024-07-13 13:49:06.191759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.191796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.191956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.191989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.192138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.192170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.192345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.192376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.192548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.192580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.192760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.192792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.192969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.193002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.193182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.193214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.193370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.193402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.193608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.193640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 
00:37:31.646 [2024-07-13 13:49:06.193855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.193893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.194081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.194112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.194298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.194330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.194484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.194516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.194718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.194753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.194957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.194989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.195139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.195179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.195324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.195356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.195526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.195557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.195709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.195741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 
00:37:31.646 [2024-07-13 13:49:06.195962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.195995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.196175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.196207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.196355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.196387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.196558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.196590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.196746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.196778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.196934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.646 [2024-07-13 13:49:06.196966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.646 qpair failed and we were unable to recover it. 00:37:31.646 [2024-07-13 13:49:06.197173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.197209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.197444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.197480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.197675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.197707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.197855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.197892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 
00:37:31.647 [2024-07-13 13:49:06.198075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.198107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.198284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.198316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.198471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.198502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.198676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.198708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.198882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.198919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.199072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.199103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.199283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.199315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.199521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.199553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.199705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.199736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.199900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.199934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 
00:37:31.647 [2024-07-13 13:49:06.200089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.200120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.200265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.200303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.200493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.200525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.200672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.200705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.200921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.200954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.201128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.201159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.201320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.201352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.201548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.201580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.201753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.201784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.201972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.202006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 
00:37:31.647 [2024-07-13 13:49:06.202182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.202213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.202349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.202381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.202563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.202605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.202789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.202821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.202984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.203016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.203184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.203216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.203425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.203457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.203630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.203662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.203873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.203910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.204057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.204088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 
00:37:31.647 [2024-07-13 13:49:06.204234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.204272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.204473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.204504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.204655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.204686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.204860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.204897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.205079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.647 [2024-07-13 13:49:06.205111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.647 qpair failed and we were unable to recover it. 00:37:31.647 [2024-07-13 13:49:06.205295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.205327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.205542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.205578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.205769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.205801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.205981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.206013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.206155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.206187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 
00:37:31.648 [2024-07-13 13:49:06.206396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.206428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.206574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.206606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.206781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.206813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.207035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.207067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.207262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.207294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.207503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.207534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.207678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.207710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.207893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.207925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.208080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.208112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.208325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.208356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 
00:37:31.648 [2024-07-13 13:49:06.208535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.208573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.208762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.208793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.208999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.209032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.209204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.209236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.209382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.209413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.209595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.209627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.209891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.209924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.210103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.210134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.210338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.210370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.210544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.210580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 
00:37:31.648 [2024-07-13 13:49:06.210805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.210837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.210993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.211027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.211205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.211237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.211453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.211485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.211696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.211736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.211973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.212006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.212201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.212237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.212463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.212495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.212668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.212700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.212841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.212878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 
00:37:31.648 [2024-07-13 13:49:06.213051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.213083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.213225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.213257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.213437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.648 [2024-07-13 13:49:06.213469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.648 qpair failed and we were unable to recover it. 00:37:31.648 [2024-07-13 13:49:06.213652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.213684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.213835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.213873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.214100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.214135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.214393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.214429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.214622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.214657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.214877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.214913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.215095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.215126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 
00:37:31.649 [2024-07-13 13:49:06.215341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.215374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.215537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.215570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.215764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.215799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.216018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.216052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.216226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.216259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.216418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.216450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.216648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.216690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.216895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.216928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.217088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.217119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.217263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.217295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 
00:37:31.649 [2024-07-13 13:49:06.217550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.217583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.217974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.218010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.218219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.218252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.218426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.218458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.218630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.218662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.218809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.218841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.219018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.219050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.219221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.219262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.219401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.219433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.219577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.219609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 
00:37:31.649 [2024-07-13 13:49:06.219759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.219791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.219973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.220006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.220158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.220190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.220368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.220400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.220546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.220578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.220755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.220787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.221003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.221036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.221212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.649 [2024-07-13 13:49:06.221243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.649 qpair failed and we were unable to recover it. 00:37:31.649 [2024-07-13 13:49:06.221446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.221479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.221681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.221712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 
00:37:31.650 [2024-07-13 13:49:06.221858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.221895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.222091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.222123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.222347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.222379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.222551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.222583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.222760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.222809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.223039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.223071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.223255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.223291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.223483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.223514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.223655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.223687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.223892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.223925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 
00:37:31.650 [2024-07-13 13:49:06.224074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.224106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.224280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.224312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.224461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.224492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.224700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.224732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.224900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.224933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.225115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.225149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.225324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.225355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.225535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.225566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.225745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.225777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.225954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.225995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 
00:37:31.650 [2024-07-13 13:49:06.226200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.226232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.226383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.226415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.226571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.226608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.226781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.226813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.227027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.227059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.227214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.227247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.227431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.227463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.227617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.227649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.227819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.227857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.228008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.228040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 
00:37:31.650 [2024-07-13 13:49:06.228247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.228283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.228430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.228462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.228636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.228669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.228877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.228914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.229084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.229115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.650 [2024-07-13 13:49:06.229358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.650 [2024-07-13 13:49:06.229390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.650 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.229567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.229598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.229803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.229836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.230042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.230105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.230337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.230391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 
00:37:31.651 [2024-07-13 13:49:06.230640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.230693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.230839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.230882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.231071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.231107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.231354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.231408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.231668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.231701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.231926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.231958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.232140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.232202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.232537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.232572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.232795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.232830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.233056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.233088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 
00:37:31.651 [2024-07-13 13:49:06.233321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.233378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.233569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.233604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.233833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.233870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.234046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.234077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.234230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.234261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.234401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.234433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.234704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.234740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.234960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.234992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.235186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.235221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.235532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.235587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 
00:37:31.651 [2024-07-13 13:49:06.235807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.235840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.236060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.236092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.651 [2024-07-13 13:49:06.236266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.651 [2024-07-13 13:49:06.236299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.651 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.236473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.236508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.236712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.236748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.236965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.236998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.237160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.237191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.237372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.237404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.237664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.237714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.237889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.237939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 
00:37:31.652 [2024-07-13 13:49:06.238110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.238142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.238298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.238331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.238520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.238552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.238699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.238753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.238972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.239006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.239180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.239214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.239364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.239396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.239590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.239625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.239785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.239822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.240017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.240050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 
00:37:31.652 [2024-07-13 13:49:06.240270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.240301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.240474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.240505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.240668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.240700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.240880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.240912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.241069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.241100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.241341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.241377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.241640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.241690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.241943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.241975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.242157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.242189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.242355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.242390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 
00:37:31.652 [2024-07-13 13:49:06.242581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.242616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.242806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.242841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.243077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.243109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.243277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.243308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.243479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.243515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.243788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.243823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.244018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.244050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.244233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.244265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.244419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.652 [2024-07-13 13:49:06.244450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.652 qpair failed and we were unable to recover it. 00:37:31.652 [2024-07-13 13:49:06.244609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.244640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 
00:37:31.653 [2024-07-13 13:49:06.244828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.244864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.245061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.245092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.245293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.245328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.245546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.245581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.245790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.245822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.246005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.246048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.246253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.246284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.246467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.246499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.246680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.246712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.246878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.246910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 
00:37:31.653 [2024-07-13 13:49:06.247095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.247157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.247315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.247347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.247539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.247571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.247736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.247772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.247959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.247991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.248161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.248199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.248401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.248439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.248606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.248638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.248818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.248854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.249058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.249090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 
00:37:31.653 [2024-07-13 13:49:06.249293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.249325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.249520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.249555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.249732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.249763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.249910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.249942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.250118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.250150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.250373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.250405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.250583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.250614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.250814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.250854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.251068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.251101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.251277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.251309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 
00:37:31.653 [2024-07-13 13:49:06.251484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.251515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.251684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.251715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.251890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.251923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.252108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.653 [2024-07-13 13:49:06.252140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.653 qpair failed and we were unable to recover it. 00:37:31.653 [2024-07-13 13:49:06.252348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.252379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.252520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.252551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.252740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.252772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.252922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.252955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.253121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.253153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.253326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.253358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 
00:37:31.654 [2024-07-13 13:49:06.253536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.253567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.253746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.253778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.253977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.254009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.254179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.254211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.254395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.254427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.254607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.254638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.254832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.254873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.255034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.255066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.255222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.255254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.255442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.255474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 
00:37:31.654 [2024-07-13 13:49:06.255638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.255674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.255863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.255909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.256097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.256129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.256297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.256332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.256503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.256535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.256691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.256722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.256920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.256953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.257091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.257122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.257326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.257357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.257527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.257559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 
00:37:31.654 [2024-07-13 13:49:06.257741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.257772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.257946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.257982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.258160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.258191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.258361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.258392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.258571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.258603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.258753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.258785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.258965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.258996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.259173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.259204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.259375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.259406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 00:37:31.654 [2024-07-13 13:49:06.259587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.654 [2024-07-13 13:49:06.259618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.654 qpair failed and we were unable to recover it. 
00:37:31.655 [2024-07-13 13:49:06.259805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.259837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.260004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.260035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.260207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.260239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.260418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.260450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.260615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.260656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.260870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.260902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.261075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.261107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.261259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.261290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.261488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.261520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.261663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.261703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 
00:37:31.655 [2024-07-13 13:49:06.261845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.261882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.262071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.262106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.262306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.262339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.262487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.262519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.262692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.262724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.262934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.262966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.263140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.263190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.263414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.263452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.263645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.263683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.264753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.264797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 
00:37:31.655 [2024-07-13 13:49:06.265035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.265068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.265243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.265276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.265443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.265475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.265670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.265711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.265931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.265963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.266139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.266171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.266329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.266360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.266533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.266565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.266712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.266744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.266964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.266996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 
00:37:31.655 [2024-07-13 13:49:06.267208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.267239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.267417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.267448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.267591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.267623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.267806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.655 [2024-07-13 13:49:06.267837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.655 qpair failed and we were unable to recover it. 00:37:31.655 [2024-07-13 13:49:06.267995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.268027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.268199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.268235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.268459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.268491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.268669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.268703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.268988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.269037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.269235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.269266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 
00:37:31.656 [2024-07-13 13:49:06.269437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.269468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.269660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.269695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.269927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.269959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.270116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.270165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.270356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.270391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.270586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.270618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.270811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.270846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.271042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.271073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.271218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.271249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.271444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.271478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 
00:37:31.656 [2024-07-13 13:49:06.271665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.271700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.271888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.271920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.272080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.272111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.272316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.272351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.272576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.272607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.272838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.272886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.273087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.273118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.273318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.273349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.273522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.273557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.273771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.273805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 
00:37:31.656 [2024-07-13 13:49:06.273979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.274011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.656 qpair failed and we were unable to recover it. 00:37:31.656 [2024-07-13 13:49:06.274203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.656 [2024-07-13 13:49:06.274238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.274514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.274548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.274787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.274823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.275058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.275090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.275298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.275333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.275560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.275592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.275828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.275863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.276093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.276134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.276313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.276344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 
00:37:31.657 [2024-07-13 13:49:06.276488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.276519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.276706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.276741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.276977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.277010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.277203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.277239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.277429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.277464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.277660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.277692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.277885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.277920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.278119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.278151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.278292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.278324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.278469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.278501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 
00:37:31.657 [2024-07-13 13:49:06.278676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.278708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.278877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.278908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.279078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.279109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.279259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.279290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.279436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.279467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.279685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.279720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.279933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.279969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.280162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.280194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.280409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.280444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.280634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.280669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 
00:37:31.657 [2024-07-13 13:49:06.280937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.280969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.281121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.281170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.281334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.281369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.281547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.281578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.281765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.281800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.282023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.282055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.282236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.282267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.282488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.282523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.282725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.282760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.282954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.282986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 
00:37:31.657 [2024-07-13 13:49:06.283186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.283222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.657 qpair failed and we were unable to recover it. 00:37:31.657 [2024-07-13 13:49:06.283412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.657 [2024-07-13 13:49:06.283448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.283649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.283681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.283899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.283940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.284108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.284143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.284339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.284370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.284564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.284599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.284800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.284836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.285041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.285073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.285240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.285275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 
00:37:31.658 [2024-07-13 13:49:06.285508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.285540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.285692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.285724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.285918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.285954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.286144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.286179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.286385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.286417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.286594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.286626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.286801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.286833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.287008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.287040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.287233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.287269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.287492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.287528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 
00:37:31.658 [2024-07-13 13:49:06.287732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.287763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.287997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.288033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.288224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.288259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.288456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.288487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.288685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.288720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.288906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.288942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.289129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.289165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.289320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.289351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.289571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.289606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.289798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.289833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 
00:37:31.658 [2024-07-13 13:49:06.290029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.290061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.290256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.290291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.290495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.290537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.290736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.290783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.290956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.290991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.291183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.291214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.291386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.291417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.658 [2024-07-13 13:49:06.291584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.658 [2024-07-13 13:49:06.291621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.658 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.291833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.291869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.292095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.292130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 
00:37:31.659 [2024-07-13 13:49:06.292321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.292356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.292546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.292577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.292772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.292808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.293019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.293055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.293226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.293258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.293446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.293481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.293644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.293678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.293886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.293919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.294087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.294122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.294321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.294356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 
00:37:31.659 [2024-07-13 13:49:06.294556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.294589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.294815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.294856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.295042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.295074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.295273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.295304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.295457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.295489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.295679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.295714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.295909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.295941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.296170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.296204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.296365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.296400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.296600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.296631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 
00:37:31.659 [2024-07-13 13:49:06.296783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.296815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.297045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.297076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.297224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.297256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.297480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.297516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.297709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.297744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.297946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.297979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.298171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.298206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.298431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.298466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.298661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.298693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.298913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.298948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 
00:37:31.659 [2024-07-13 13:49:06.299170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.299205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.299383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.299416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.299591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.299623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.299765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.299797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.659 qpair failed and we were unable to recover it. 00:37:31.659 [2024-07-13 13:49:06.299983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.659 [2024-07-13 13:49:06.300015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.300179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.300214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.300415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.300450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.300639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.300670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.300875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.300911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.301071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.301107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 
00:37:31.660 [2024-07-13 13:49:06.301333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.301364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.301596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.301631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.301790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.301825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.302026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.302063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.302269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.302304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.302492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.302527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.302722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.302754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.302953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.302989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.303152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.303187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.303349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.303381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 
00:37:31.660 [2024-07-13 13:49:06.303597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.303631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.303823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.303857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.304034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.304066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.304300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.304335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.304514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.304548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.304744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.304775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.304976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.305023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.305227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.305259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.305409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.305440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.305588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.305619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 
00:37:31.660 [2024-07-13 13:49:06.305819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.305854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.306027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.306058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.306233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.306265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.306417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.306449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.306623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.306655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.306845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.306886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.307054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.307086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.307259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.307291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.307450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.307482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 00:37:31.660 [2024-07-13 13:49:06.307687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.660 [2024-07-13 13:49:06.307736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.660 qpair failed and we were unable to recover it. 
00:37:31.660 [2024-07-13 13:49:06.307947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.307979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.308174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.308209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.308370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.308405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.308603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.308635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.308809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.308845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.309044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.309080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.309300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.309331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.309500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.309535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.309717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.309752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.309940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.309972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 
00:37:31.661 [2024-07-13 13:49:06.310164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.310199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.310358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.310393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.310586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.310618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.310785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.310826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.311064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.311096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.311272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.311305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.311525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.311559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.311732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.311767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.311960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.311992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.312153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.312189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 
00:37:31.661 [2024-07-13 13:49:06.312378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.312413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.312609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.312641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.312864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.312904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.313068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.313104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.313306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.313337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.313555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.313591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.313782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.313818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.313995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.314027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.314246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.314281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.661 qpair failed and we were unable to recover it. 00:37:31.661 [2024-07-13 13:49:06.314511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.661 [2024-07-13 13:49:06.314546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 
00:37:31.662 [2024-07-13 13:49:06.314798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.314833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.315054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.315087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.315279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.315314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.315477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.315508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.315680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.315712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.315886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.315919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.316151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.316183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.316387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.316424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.316589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.316624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.316795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.316826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 
00:37:31.662 [2024-07-13 13:49:06.317057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.317093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.317262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.317297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.317487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.317519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.317707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.317742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.317936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.317972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.318196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.318227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.318434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.318469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.318627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.318663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.318831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.318862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.319064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.319099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 
00:37:31.662 [2024-07-13 13:49:06.319303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.319344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.319522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.319553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.319715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.319750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.319939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.319980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.320184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.320216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.320418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.320467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.320662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.320697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.320893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.320925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.321118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.321153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.321315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.321351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 
00:37:31.662 [2024-07-13 13:49:06.321517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.321549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.321711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.321746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.321947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.321980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.322180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.662 [2024-07-13 13:49:06.322211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.662 qpair failed and we were unable to recover it. 00:37:31.662 [2024-07-13 13:49:06.322431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.322466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.322653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.322688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.322857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.322894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.323095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.323131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.323302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.323337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.323534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.323566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 
00:37:31.663 [2024-07-13 13:49:06.323701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.323733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.323925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.323961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.324160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.324192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.324393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.324427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.324596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.324631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.324817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.324852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.325026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.325058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.325196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.325228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.325427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.325459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.325622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.325657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 
00:37:31.663 [2024-07-13 13:49:06.325818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.325853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.326049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.326081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.326300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.326335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.326523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.326558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.326758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.326791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.326965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.326998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.327223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.327258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.327483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.327514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.327698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.327733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.327900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.327936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 
00:37:31.663 [2024-07-13 13:49:06.328157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.328188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.328361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.328396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.328593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.328629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.328799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.328835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.329030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.329066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.329254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.329290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.329466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.329499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.329718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.329754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.329945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.329981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.330176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.330208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 
00:37:31.663 [2024-07-13 13:49:06.330381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.330413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.330605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.330640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.663 qpair failed and we were unable to recover it. 00:37:31.663 [2024-07-13 13:49:06.330839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.663 [2024-07-13 13:49:06.330876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.331072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.331107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.331268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.331303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.331527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.331559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.331742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.331774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.331935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.331968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.332165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.332197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.332437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.332469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 
00:37:31.664 [2024-07-13 13:49:06.332671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.332706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.332988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.333020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.333244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.333279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.333445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.333480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.333702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.333743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.333956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.333992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.334210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.334245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.334451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.334483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.334703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.334739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.334923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.334959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 
00:37:31.664 [2024-07-13 13:49:06.335130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.335162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.335359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.335408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.335580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.335615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.335818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.335849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.336034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.336066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.336260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.336296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.336496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.336527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.336719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.336755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.336960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.336993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.337168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.337200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 
00:37:31.664 [2024-07-13 13:49:06.337392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.337427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.337615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.337650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.337880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.337914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.338065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.338101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.338293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.338329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.338553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.338585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.338808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.338843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.339018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.339055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.664 qpair failed and we were unable to recover it. 00:37:31.664 [2024-07-13 13:49:06.339282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.664 [2024-07-13 13:49:06.339314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.339471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.339506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 
00:37:31.665 [2024-07-13 13:49:06.339712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.339744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.339943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.339975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.340155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.340187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.340363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.340395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.340594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.340625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.340825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.340860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.341033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.341065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.341237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.341268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.341464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.341499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.341663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.341698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 
00:37:31.665 [2024-07-13 13:49:06.341918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.341950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.342149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.342184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.342339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.342375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.342575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.342607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.342797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.342832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.343010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.343044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.343188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.343220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.343370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.343402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.343547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.343579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.343754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.343786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 
00:37:31.665 [2024-07-13 13:49:06.343958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.343998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.344183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.344218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.344412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.344444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.344633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.344668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.344832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.344872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.345068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.345100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.345292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.345328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.345514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.345549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.665 [2024-07-13 13:49:06.345743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.665 [2024-07-13 13:49:06.345774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.665 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.345970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.346006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 
00:37:31.666 [2024-07-13 13:49:06.346167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.346202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.346393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.346425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.346644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.346679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.346843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.346884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.347093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.347125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.347274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.347306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.347446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.347494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.347665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.347697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.347854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.347916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.348102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.348137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 
00:37:31.666 [2024-07-13 13:49:06.348334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.348365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.348563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.348598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.348779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.348815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.349042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.349074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.349221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.349253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.349403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.349452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.349619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.349651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.349844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.349885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.350103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.350134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.350341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.350373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 
00:37:31.666 [2024-07-13 13:49:06.350575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.350606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.350803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.350835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.351057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.351089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.351265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.351296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.351494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.351525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.351666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.351698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.351948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.351984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.352185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.352217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.352388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.352419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.352586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.352621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 
00:37:31.666 [2024-07-13 13:49:06.352772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.352812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.666 [2024-07-13 13:49:06.353004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.666 [2024-07-13 13:49:06.353036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.666 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.353184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.353233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.353426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.353461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.353686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.353718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.353914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.353949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.354143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.354175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.354349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.354381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.354538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.354574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.354755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.354791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 
00:37:31.667 [2024-07-13 13:49:06.355013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.355045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.355242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.355289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.355473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.355509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.355707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.355739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.355930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.355962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.356131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.356179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.356355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.356388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.356619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.356654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.356813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.356848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.357049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.357081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 
00:37:31.667 [2024-07-13 13:49:06.357316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.357359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.357585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.357617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.357828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.357859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.358037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.358073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.358283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.358318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.358512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.358543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.358758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.358793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.359006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.359040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.359212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.359248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 00:37:31.667 [2024-07-13 13:49:06.359422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.667 [2024-07-13 13:49:06.359454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.667 qpair failed and we were unable to recover it. 
00:37:31.944 [2024-07-13 13:49:06.359636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.944 [2024-07-13 13:49:06.359668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.944 qpair failed and we were unable to recover it. 00:37:31.944 [2024-07-13 13:49:06.359853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.944 [2024-07-13 13:49:06.359891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.944 qpair failed and we were unable to recover it. 00:37:31.944 [2024-07-13 13:49:06.360086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.944 [2024-07-13 13:49:06.360122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.944 qpair failed and we were unable to recover it. 00:37:31.944 [2024-07-13 13:49:06.360287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.360322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.360491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.360523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.360699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.360733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.360929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.360978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.361169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.361201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.361399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.361437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.361617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.361649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 
00:37:31.945 [2024-07-13 13:49:06.361824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.361861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.362081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.362114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.362262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.362304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.362533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.362566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.362776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.362813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.363023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.363055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.363204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.363239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.363450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.363486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.363662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.363698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.363890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.363948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 
00:37:31.945 [2024-07-13 13:49:06.364168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.364210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.364391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.364427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.364624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.364656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.364872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.364908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.365091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.365127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.365352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.365384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.365554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.365589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.365782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.365813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.366013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.366045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.366244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.366279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 
00:37:31.945 [2024-07-13 13:49:06.366484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.366517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.366719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.366751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.366943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.366978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.367160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.367195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.367397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.367428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.367646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.945 [2024-07-13 13:49:06.367680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.945 qpair failed and we were unable to recover it. 00:37:31.945 [2024-07-13 13:49:06.367875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.367911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.368126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.368158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.368354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.368388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.368603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.368638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 
00:37:31.946 [2024-07-13 13:49:06.368833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.368864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.369038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.369074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.369255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.369287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.369434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.369465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.369689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.369724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.369940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.369975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.370173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.370205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.370420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.370455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.370623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.370658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.370834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.370871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 
00:37:31.946 [2024-07-13 13:49:06.371028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.371064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.371215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.371247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.371426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.371457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.371629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.371661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.371798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.371830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.372027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.372067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.372296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.372336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.372543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.372578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.372774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.372809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.372983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.373015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 
00:37:31.946 [2024-07-13 13:49:06.373188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.373219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.373404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.373435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.373626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.373662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.373859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.373900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.374104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.374135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.374370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.374402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.374574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.374605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.374749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.374781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.374997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.375033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.375261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.375292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 
00:37:31.946 [2024-07-13 13:49:06.375469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.375501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.375652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.375684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.375878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.375914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.376108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.946 [2024-07-13 13:49:06.376141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.946 qpair failed and we were unable to recover it. 00:37:31.946 [2024-07-13 13:49:06.376319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.376351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.376554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.376589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.376775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.376817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.377051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.377087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.377275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.377310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.377480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.377512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 
00:37:31.947 [2024-07-13 13:49:06.377665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.377696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.377871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.377903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.378086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.378117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.378310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.378345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.378502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.378538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.378730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.378761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.378916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.378948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.379163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.379199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.379394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.379425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.379578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.379627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 
00:37:31.947 [2024-07-13 13:49:06.379810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.379850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.380090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.380122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.380320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.380355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.380506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.380541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.380715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.380747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.380904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.380936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.381125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.381160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.381357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.381389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.381555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.381591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.381775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.381810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 
00:37:31.947 [2024-07-13 13:49:06.381984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.382016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.382168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.382200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.382345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.382395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.382620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.382652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.382834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.382874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.383040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.383075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.383269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.383301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.383525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.383557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.383736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.383767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 00:37:31.947 [2024-07-13 13:49:06.383948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.947 [2024-07-13 13:49:06.383980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.947 qpair failed and we were unable to recover it. 
00:37:31.947 [2024-07-13 13:49:06.384212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.384244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.384421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.384453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.384663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.384696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.384884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.384919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.385110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.385146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.385344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.385376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.385563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.385598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.385790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.385825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.386028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.386060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.386248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.386284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 
00:37:31.948 [2024-07-13 13:49:06.386506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.386541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.386804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.386839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.387039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.387071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.387241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.387276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.387464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.387495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.387717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.387752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.387918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.387954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.388160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.388191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.388425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.388457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.388607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.388641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 
00:37:31.948 [2024-07-13 13:49:06.388791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.388826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.389028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.389063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.389282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.389318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.389511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.389542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.389715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.389751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.389975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.390011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.390217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.390249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.390445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.390480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.390672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.390707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.390877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.390910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 
00:37:31.948 [2024-07-13 13:49:06.391104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.391150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.391348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.391380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.391567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.391598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.391793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.391829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.392015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.392047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.392223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.392255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.392447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.392482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.392679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.392714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.392902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.392934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 00:37:31.948 [2024-07-13 13:49:06.393090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.948 [2024-07-13 13:49:06.393122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.948 qpair failed and we were unable to recover it. 
00:37:31.949 [2024-07-13 13:49:06.393291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.393323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.393471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.393503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.393697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.393732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.393943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.393976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.394147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.394178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.394389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.394421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.394565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.394612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.394783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.394815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.395051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.395087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.395271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.395306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 
00:37:31.949 [2024-07-13 13:49:06.395496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.395528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.395699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.395734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.395915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.395951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.396176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.396208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.396435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.396470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.396671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.396707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.396903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.396935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.397133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.397168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.397332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.397368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 00:37:31.949 [2024-07-13 13:49:06.397563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.949 [2024-07-13 13:49:06.397594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.949 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 13:49:06.397819 through 13:49:06.442552 ...]
00:37:31.955 [2024-07-13 13:49:06.442700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.442736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.442896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.442928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.443108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.443140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.443286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.443318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.443487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.443519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.443715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.443747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.443932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.443964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.444112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.444144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.444289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.444322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.444466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.444497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 
00:37:31.955 [2024-07-13 13:49:06.444713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.444749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.444913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.444949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.445159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.445191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.445364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.445395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.445571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.445619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.445818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.445850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.955 [2024-07-13 13:49:06.446012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.955 [2024-07-13 13:49:06.446044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.955 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.446200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.446232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.446405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.446436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.446584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.446616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 
00:37:31.956 [2024-07-13 13:49:06.446796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.446828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.447004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.447036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.447209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.447241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.447430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.447462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.447616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.447648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.447820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.447854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.448036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.448078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.448253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.448315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.448558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.448613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.448816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.448850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 
00:37:31.956 [2024-07-13 13:49:06.449013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.449046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.449243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.449279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.449470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.449510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.449750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.449786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.449972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.450004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.450177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.450209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.450382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.450415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.450602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.450636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.450826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.450861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.451057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.451088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 
00:37:31.956 [2024-07-13 13:49:06.451258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.451293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.451508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.451540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.451767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.451802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.451993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.452025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.452186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.452217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.452480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.452515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.452716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.452752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.452960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.452993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.453164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.453196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.453356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.453388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 
00:37:31.956 [2024-07-13 13:49:06.453579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.453614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.453770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.453806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.453986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.956 [2024-07-13 13:49:06.454018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.956 qpair failed and we were unable to recover it. 00:37:31.956 [2024-07-13 13:49:06.454208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.454244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.454474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.454510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.454701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.454739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.454898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.454947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.455089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.455121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.455294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.455333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.455535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.455570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 
00:37:31.957 [2024-07-13 13:49:06.455731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.455766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.455975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.456007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.456186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.456218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.456390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.456422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.456593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.456624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.456781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.456814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.456997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.457029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.457171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.457206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.457429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.457462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.457635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.457671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 
00:37:31.957 [2024-07-13 13:49:06.457862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.457918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.458083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.458114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.458312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.458353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.458525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.458560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.458757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.458793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.459010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.459042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.459197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.459229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.459427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.459459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.459666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.459701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.459914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.459947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 
00:37:31.957 [2024-07-13 13:49:06.460126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.460158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.460338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.460370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.460535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.460567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.460736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.460767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.460944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.460976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.461150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.461182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.461351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.461382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.461587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.461619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.461796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.461829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 00:37:31.957 [2024-07-13 13:49:06.462017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.462050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.957 qpair failed and we were unable to recover it. 
00:37:31.957 [2024-07-13 13:49:06.462264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.957 [2024-07-13 13:49:06.462296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.462475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.462506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.462681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.462713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.462911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.462953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.463108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.463140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.463319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.463350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.463524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.463555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.463753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.463784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.463949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.463981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.464230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.464261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 
00:37:31.958 [2024-07-13 13:49:06.464417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.464449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.464594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.464625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.464824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.464855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.465011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.465042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.465190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.465221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.465420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.465455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.465608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.465642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.465858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.465904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.466097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.466128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.466328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.466368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 
00:37:31.958 [2024-07-13 13:49:06.466515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.466546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.466726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.466759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.466902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.466934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.467073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.467105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.467273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.467305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.467479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.467512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.467685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.467717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.467893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.467925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.468103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.468134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.468284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.468315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 
00:37:31.958 [2024-07-13 13:49:06.468483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.468514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.468662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.468693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.468846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.468883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.469057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.469088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.958 qpair failed and we were unable to recover it. 00:37:31.958 [2024-07-13 13:49:06.469279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.958 [2024-07-13 13:49:06.469310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.469458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.469490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.469671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.469702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.469901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.469933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.470102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.470134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.470279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.470310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 
00:37:31.959 [2024-07-13 13:49:06.470499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.470530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.470682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.470714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.470894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.470927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.471078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.471109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.471276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.471311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.471545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.471576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.471726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.471758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.471960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.471993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.472164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.472195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.472396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.472428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 
00:37:31.959 [2024-07-13 13:49:06.472605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.472636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.472773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.472804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.472993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.473025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.473224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.473256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.473402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.473434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.473661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.473696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.473860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.473901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.474121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.474167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.474345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.474376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.474535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.474566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 
00:37:31.959 [2024-07-13 13:49:06.474751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.474782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.474955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.959 [2024-07-13 13:49:06.474987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.959 qpair failed and we were unable to recover it. 00:37:31.959 [2024-07-13 13:49:06.475180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.475211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.475380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.475411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.475628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.475661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.475831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.475862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.476032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.476065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.476266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.476327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.476534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.476569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.476772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.476803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 
00:37:31.960 [2024-07-13 13:49:06.476989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.477021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.477172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.477204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.477379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.477411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.477585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.477616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.477796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.477828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.478010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.478042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.478217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.478249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.478418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.478450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.478594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.478625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.478804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.478836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 
00:37:31.960 [2024-07-13 13:49:06.479047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.479080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.479221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.479253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.479394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.479426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.479575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.479606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.479813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.479848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.480092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.480125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.480268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.480300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.480474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.480505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.480688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.480719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.480861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.480899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 
00:37:31.960 [2024-07-13 13:49:06.481064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.481096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.481238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.481270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.481420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.481451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.481639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.481671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.481839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.481880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.482083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.482133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.482348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.482380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.960 [2024-07-13 13:49:06.482528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.960 [2024-07-13 13:49:06.482563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.960 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.482735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.482766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.482948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.482980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 
00:37:31.961 [2024-07-13 13:49:06.483128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.483159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.483339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.483386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.483592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.483623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.483760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.483791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.483976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.484009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.484262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.484294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.484486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.484518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.484666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.484697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.484875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.484907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.485073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.485108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 
00:37:31.961 [2024-07-13 13:49:06.485275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.485310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.485512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.485544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.485712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.485744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.485913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.485946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.486080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.486112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.486290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.486322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.486525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.486557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.486722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.486753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.486927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.486959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.487179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.487214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 
00:37:31.961 [2024-07-13 13:49:06.487425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.487456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.487645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.487680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.487845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.487908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.488110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.488142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.488303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.488338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.488527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.488564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.488776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.488811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.489020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.489053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.489277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.489312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.489482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.489514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 
00:37:31.961 [2024-07-13 13:49:06.489672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.489707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.489921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.489968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.490194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.490226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.490459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.490490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.961 qpair failed and we were unable to recover it. 00:37:31.961 [2024-07-13 13:49:06.490714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.961 [2024-07-13 13:49:06.490748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.490946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.490978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.491112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.491144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.491329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.491368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.491602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.491633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.491861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.491901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 
00:37:31.962 [2024-07-13 13:49:06.492131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.492162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.492308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.492339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.492566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.492601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.492803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.492834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.493036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.493068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.493266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.493301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.493484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.493518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.493696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.493728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.493899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.493931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.494155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.494187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 
00:37:31.962 [2024-07-13 13:49:06.494392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.494424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.494632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.494667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.494884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.494919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.495088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.495119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.495322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.495357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.495555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.495586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.495758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.495789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.495999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.496034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.496224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.496259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.496451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.496482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 
00:37:31.962 [2024-07-13 13:49:06.496673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.496708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.496937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.496969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.497123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.497156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.497329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.497360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.497530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.497562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.497773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.497804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.497969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.498006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.498196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.498231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.498394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.498425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.498643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.498678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 
00:37:31.962 [2024-07-13 13:49:06.498918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.498954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.499147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.499179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.499406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.962 [2024-07-13 13:49:06.499438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.962 qpair failed and we were unable to recover it. 00:37:31.962 [2024-07-13 13:49:06.499621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.499652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.499823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.499854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.500054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.500089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.500252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.500287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.500479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.500514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.500659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.500691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.500838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.500874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 
00:37:31.963 [2024-07-13 13:49:06.501018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.501050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.501265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.501300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.501474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.501506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.501652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.501684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.501904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.501940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.502129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.502164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.502388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.502419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.502615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.502650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.502814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.502849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.503030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.503062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 
00:37:31.963 [2024-07-13 13:49:06.503214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.503245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.503396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.503428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.503624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.503656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.503823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.503858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.504056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.504088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.504232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.504274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.504449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.504481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.504695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.504730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.504923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.504955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.505105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.505137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 
00:37:31.963 [2024-07-13 13:49:06.505307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.505357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.505553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.505585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.505725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.505757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.505930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.505966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.506139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.506170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.506369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.506404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.506617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.506653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.963 [2024-07-13 13:49:06.506850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.963 [2024-07-13 13:49:06.506886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.963 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.507062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.507094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.507305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.507336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 
00:37:31.964 [2024-07-13 13:49:06.507513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.507546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.507741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.507776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.507999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.508031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.508215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.508246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.508446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.508481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.508697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.508732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.508900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.508933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.509096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.509136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.509323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.509358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.509545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.509576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 
00:37:31.964 [2024-07-13 13:49:06.509774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.509809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.510013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.510045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.510183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.510215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.510433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.510468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.510687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.510722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.510900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.510933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.511154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.511189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.511412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.511447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.511655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.511687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.511859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.511910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 
00:37:31.964 [2024-07-13 13:49:06.512102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.512136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.512306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.512338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.512535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.512572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.512764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.512799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.513029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.513062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.513221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.513253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.964 [2024-07-13 13:49:06.513425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.964 [2024-07-13 13:49:06.513477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.964 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.513669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.513701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.513854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.513892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.514037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.514069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 
00:37:31.965 [2024-07-13 13:49:06.514242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.514273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.514442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.514477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.514632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.514667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.514871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.514903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.515077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.515111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.515294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.515329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.515562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.515594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.515768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.515804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.515992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.516028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.516197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.516229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 
00:37:31.965 [2024-07-13 13:49:06.516412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.516447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.516661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.516696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.516871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.516903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.517056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.517087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.517254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.517286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.517468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.517500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.517645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.517677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.517879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.517920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.518144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.518176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.518397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.518445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 
00:37:31.965 [2024-07-13 13:49:06.518681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.518716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.518914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.518947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.519110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.519145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.519313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.519348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.519567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.519598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.519790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.519825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.520040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.520073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.520246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.520278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.520498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.520533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.520725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.520760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 
00:37:31.965 [2024-07-13 13:49:06.520954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.520987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.521179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.965 [2024-07-13 13:49:06.521215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.965 qpair failed and we were unable to recover it. 00:37:31.965 [2024-07-13 13:49:06.521407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.521442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.521631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.521663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.521847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.521888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.522054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.522089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.522255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.522287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.522513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.522548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.522742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.522777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.522996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.523028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 
00:37:31.966 [2024-07-13 13:49:06.523204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.523235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.523454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.523489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.523656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.523687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.523897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.523933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.524122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.524157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.524352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.524383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.524579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.524614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.524773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.524807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.524978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.525010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.525188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.525220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 
00:37:31.966 [2024-07-13 13:49:06.525395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.525427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.525601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.525632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.525862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.525902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.526117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.526152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.526342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.526374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.526542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.526577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.526799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.526830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.526983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.527019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.527232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.527268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.527468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.527503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 
00:37:31.966 [2024-07-13 13:49:06.527684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.527719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.527944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.527976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.528126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.528174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.528367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.528399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.528573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.528604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.528767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.528798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.528966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.528999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.529167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.529202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.529358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.966 [2024-07-13 13:49:06.529393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.966 qpair failed and we were unable to recover it. 00:37:31.966 [2024-07-13 13:49:06.529610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.529642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 
00:37:31.967 [2024-07-13 13:49:06.529839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.529879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.530082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.530117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.530309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.530341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.530510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.530545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.530724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.530759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.530947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.530979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.531176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.531211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.531394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.531430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.531623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.531654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.531873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.531909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 
00:37:31.967 [2024-07-13 13:49:06.532107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.532143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.532345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.532376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.532522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.532554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.532719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.532760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.532968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.533001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.533197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.533232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.533445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.533480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.533640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.533671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.533856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.533897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.534115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.534150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 
00:37:31.967 [2024-07-13 13:49:06.534344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.534376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.534575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.534610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.534794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.534829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.534995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.535027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.535169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.535219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.535386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.535421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.535635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.535666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.535889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.535925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.536120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.536155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.536379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.536410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 
00:37:31.967 [2024-07-13 13:49:06.536607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.536642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.536838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.536888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.537089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.537120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.537284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.537319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.537505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.537540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.537715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.537746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.967 [2024-07-13 13:49:06.537925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.967 [2024-07-13 13:49:06.537957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.967 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.538176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.538211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.538374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.538405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.538573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.538608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 
00:37:31.968 [2024-07-13 13:49:06.538775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.538807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.539016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.539048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.539239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.539274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.539457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.539491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.539667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.539698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.539883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.539931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.540148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.540183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.540359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.540390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.540585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.540617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.540807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.540843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 
00:37:31.968 [2024-07-13 13:49:06.541044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.541075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.541240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.541276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.541473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.541509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.541727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.541758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.541987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.542019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.542202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.542233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.542481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.542512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.542689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.542720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.542874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.542923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.543086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.543117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 
00:37:31.968 [2024-07-13 13:49:06.543316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.543364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.543559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.543590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.543781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.543816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.544032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.544064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.544250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.544285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.544485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.544516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.544684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.544718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.544909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.544949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.545127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.545158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.545343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.545378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 
00:37:31.968 [2024-07-13 13:49:06.545568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.545604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.545800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.968 [2024-07-13 13:49:06.545832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.968 qpair failed and we were unable to recover it. 00:37:31.968 [2024-07-13 13:49:06.546002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.546038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.546245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.546277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.546450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.546481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.546681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.546728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.546925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.546961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.547139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.547180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.547343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.547378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.547546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.547582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 
00:37:31.969 [2024-07-13 13:49:06.547774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.547805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.547985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.548021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.548182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.548217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.548413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.548445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.548597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.548629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.548795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.548826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.549035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.549067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.549238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.549273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.549487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.549518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.549718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.549749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 
00:37:31.969 [2024-07-13 13:49:06.549914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.549946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.550140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.550176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.550365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.550396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.550583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.550617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.550777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.550812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.551014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.551046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.551244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.551279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.551504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.551539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.551768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.551803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.552039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.552071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 
00:37:31.969 [2024-07-13 13:49:06.552247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.552283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.552449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.552481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.552651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.552686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.969 [2024-07-13 13:49:06.552884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.969 [2024-07-13 13:49:06.552920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.969 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.553119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.553151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.553349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.553381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.553569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.553605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.553776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.553811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.554016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.554052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.554250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.554281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 
00:37:31.970 [2024-07-13 13:49:06.554479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.554510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.554691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.554725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.554907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.554943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.555132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.555163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.555368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.555403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.555588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.555623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.555841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.555879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.556078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.556113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.556299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.556334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.556519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.556550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 
00:37:31.970 [2024-07-13 13:49:06.556740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.556777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.556974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.557010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.557183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.557216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.557404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.557440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.557638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.557669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.557843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.557880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.558074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.558110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.558302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.558338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.558553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.558585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.558762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.558799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 
00:37:31.970 [2024-07-13 13:49:06.558991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.559027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.559193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.559224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.559420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.559455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.559649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.559681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.559924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.559956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.560132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.560183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.560347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.560384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.560554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.560587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.560728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.560760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.560955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.560987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 
00:37:31.970 [2024-07-13 13:49:06.561184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.970 [2024-07-13 13:49:06.561215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.970 qpair failed and we were unable to recover it. 00:37:31.970 [2024-07-13 13:49:06.561404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.561450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.561615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.561651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.561834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.561870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.562056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.562091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.562282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.562318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.562543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.562574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.562769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.562809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.563016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.563048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.563243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.563274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 
00:37:31.971 [2024-07-13 13:49:06.563467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.563502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.563729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.563764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.563998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.564031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.564232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.564264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.564463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.564494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.564706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.564738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.564910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.564942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.565113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.565146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.565335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.565367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.565583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.565619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 
00:37:31.971 [2024-07-13 13:49:06.565781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.565818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.565988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.566020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.566179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.566214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.566426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.566461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.566654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.566685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.566837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.566873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.567051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.567083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.567297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.567329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.567471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.567522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.567681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.567716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 
00:37:31.971 [2024-07-13 13:49:06.567889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.567921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.568085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.568116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.568312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.568348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.568521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.568553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.568745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.568784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.568981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.569017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.569208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.569240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.569457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.569492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.569675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.971 [2024-07-13 13:49:06.569710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.971 qpair failed and we were unable to recover it. 00:37:31.971 [2024-07-13 13:49:06.569925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.569958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 
00:37:31.972 [2024-07-13 13:49:06.570163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.570198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.570392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.570428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.570649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.570681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.570887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.570923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.571076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.571111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.571277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.571309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.571498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.571532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.571711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.571743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.571922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.571955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.572175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.572210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 
00:37:31.972 [2024-07-13 13:49:06.572398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.572433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.572626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.572658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.572849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.572891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.573079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.573114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.573307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.573339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.573507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.573542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.573745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.573776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.573942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.573975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.574164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.574200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.574386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.574421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 
00:37:31.972 [2024-07-13 13:49:06.574573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.574605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.574790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.574821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.574998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.575030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.575198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.575230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.575447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.575482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.575708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.575750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.575903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.575936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.576116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.576165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.576329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.576364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.576541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.576572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 
00:37:31.972 [2024-07-13 13:49:06.576763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.576797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.577010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.577046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.577269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.577301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.577497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.577532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.577723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.577763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.972 qpair failed and we were unable to recover it. 00:37:31.972 [2024-07-13 13:49:06.577960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.972 [2024-07-13 13:49:06.577992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.578189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.578224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.578414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.578449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.578641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.578673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.578827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.578859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 
00:37:31.973 [2024-07-13 13:49:06.579085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.579120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.579293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.579324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.579499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.579530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.579710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.579742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.579931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.579973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.580161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.580201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.580412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.580447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.580659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.580690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.580918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.580954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.581147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.581182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 
00:37:31.973 [2024-07-13 13:49:06.581345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.581377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.581565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.581600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.581756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.581791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.581981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.582014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.582233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.582268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.582451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.582486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.582681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.582713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.582890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.582926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.583114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.583150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 00:37:31.973 [2024-07-13 13:49:06.583339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.973 [2024-07-13 13:49:06.583370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.973 qpair failed and we were unable to recover it. 
00:37:31.973 [2024-07-13 13:49:06.583566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:31.973 [2024-07-13 13:49:06.583601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:31.973 qpair failed and we were unable to recover it.
00:37:31.973 [... the same three-line error sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for each successive connection attempt through [2024-07-13 13:49:06.630079] ...]
00:37:31.980 [2024-07-13 13:49:06.630273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.630307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.630497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.630532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.633039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.633071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.633267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.633302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.633522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.633562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.633792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.633823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.634052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.634088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.634273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.634306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.634513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.634545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.634909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.634945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 
00:37:31.980 [2024-07-13 13:49:06.635134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.635169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.635384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.635426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.635623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.635658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.635828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.635863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.636062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.636094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.636300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.636335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.636494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.636529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.636725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.636756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.636963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.636999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.637192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.637227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 
00:37:31.980 [2024-07-13 13:49:06.637430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.637461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.637637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.637672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.637885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.637921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.638095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.638127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.638348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.638383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.638579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.638614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.638812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.638843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.639024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.639055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.639218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.639253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.639484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.639517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 
00:37:31.980 [2024-07-13 13:49:06.639691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.639722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.639931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.639968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.640192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.640224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.640416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.640451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.980 [2024-07-13 13:49:06.640692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.980 [2024-07-13 13:49:06.640728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.980 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.640962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.640995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.641168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.641200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.641363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.641399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.641571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.641603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.641792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.641827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 
00:37:31.981 [2024-07-13 13:49:06.642000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.642033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.642181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.642213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.642375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.642410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.642597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.642632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.642851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.642902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.643096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.643131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.643293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.643328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.643521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.643552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.643748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.643784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.643974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.644009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 
00:37:31.981 [2024-07-13 13:49:06.644216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.644248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.644432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.644467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.644667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.644699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.644833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.644869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.645018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.645049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.645211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.645243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.645488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.645519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.645758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.645789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.645988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.646020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.646168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.646199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 
00:37:31.981 [2024-07-13 13:49:06.646422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.646457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.646647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.646684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.646857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.646899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.647069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.647101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.647241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.647273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.647472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.647503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.647702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.647737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.647924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.647956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.648154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.648186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.648376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.648411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 
00:37:31.981 [2024-07-13 13:49:06.648626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.648661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.981 [2024-07-13 13:49:06.648849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.981 [2024-07-13 13:49:06.648886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.981 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.649090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.649126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.649317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.649353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.649571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.649603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.649762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.649808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.650045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.650078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.650253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.650285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.650480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.650515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.650738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.650769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 
00:37:31.982 [2024-07-13 13:49:06.650941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.650973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.651172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.651207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.651432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.651464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.651643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.651674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.651971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.652010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.652167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.652199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.652387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.652419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.652638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.652673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.652856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.652893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.653090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.653121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 
00:37:31.982 [2024-07-13 13:49:06.653321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.653356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.653546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.653582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.653779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.653810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.654019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.654055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.654248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.654283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.654501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.654533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.654756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.654791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.654991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.655027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.655234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.655265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.655418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.655450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 
00:37:31.982 [2024-07-13 13:49:06.655645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.655680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.655860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.655897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.656091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.656128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.656330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.656365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.656531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.656562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.656739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.656770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.656965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.657001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.982 [2024-07-13 13:49:06.657234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.982 [2024-07-13 13:49:06.657266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.982 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.657488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.657524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.657711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.657746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 
00:37:31.983 [2024-07-13 13:49:06.657923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.657955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.658137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.658169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.658334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.658365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.658575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.658607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.658769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.658804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.659038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.659075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.659250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.659283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.659497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.659529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.659706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.659737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.659945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.659977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 
00:37:31.983 [2024-07-13 13:49:06.660146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.660197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.660389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.660425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.660646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.660678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.660860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.660896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.661113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.661153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.661354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.661386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.661562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.661594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.661766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.661798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.661998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.662031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.662200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.662235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 
00:37:31.983 [2024-07-13 13:49:06.662423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.662459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.662644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.662676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.662833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.662874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.663084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.663115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.663253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.663285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.983 qpair failed and we were unable to recover it. 00:37:31.983 [2024-07-13 13:49:06.663432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.983 [2024-07-13 13:49:06.663464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.663645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.663677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.663922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.663954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.664195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.664227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.664405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.664447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 
00:37:31.984 [2024-07-13 13:49:06.664647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.664679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.664908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.664940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.665091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.665122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.665296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.665328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.665546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.665581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.665771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.665807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.666009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.666041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.666266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.666303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.666501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.666538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.666815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.666851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 
00:37:31.984 [2024-07-13 13:49:06.667032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.667064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.667298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.667352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.667570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.667604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.667783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.667816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.668035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.668068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.668256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.668292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.668470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.668505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.668682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.668730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.668919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.668953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.669106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.669161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 
00:37:31.984 [2024-07-13 13:49:06.669480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.669538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.669759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.669790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.669984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.670018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:31.984 [2024-07-13 13:49:06.670210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.984 [2024-07-13 13:49:06.670245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:31.984 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.670418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.670456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.670606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.670639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.670858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.670925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.671136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.671179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.671414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.671465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.671681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.671729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 
00:37:32.264 [2024-07-13 13:49:06.671953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.671997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.672263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.672306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.672481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.672540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.672839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.672918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.673100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.673143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.673423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.673466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.673663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.673700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.673856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.673895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.674077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.674109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.674299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.674333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 
00:37:32.264 [2024-07-13 13:49:06.674575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.674608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.674810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.674858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.675058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.675090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.264 qpair failed and we were unable to recover it. 00:37:32.264 [2024-07-13 13:49:06.675295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.264 [2024-07-13 13:49:06.675331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.675634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.675697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.675906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.675939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.676116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.676167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.676364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.676401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.676563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.676595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.676794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.676830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 
00:37:32.265 [2024-07-13 13:49:06.677009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.677053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.677224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.677259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.677480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.677515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.677726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.677762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.677970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.678004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.678161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.678193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.678372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.678414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.678591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.678623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.678785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.678820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.678999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.679031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 
00:37:32.265 [2024-07-13 13:49:06.679202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.679233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.679408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.679442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.679662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.679698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.679927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.679959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.680118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.680172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.680336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.680372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.680589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.680621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.680803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.680836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.680995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.681028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.681207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.681239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 
00:37:32.265 [2024-07-13 13:49:06.681425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.681456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.681661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.681694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.681847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.681888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.265 [2024-07-13 13:49:06.682083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.265 [2024-07-13 13:49:06.682116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.265 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.682264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.682304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.682485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.682525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.682725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.682757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.682907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.682940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.683089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.683121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.683327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.683359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 
00:37:32.266 [2024-07-13 13:49:06.683531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.683563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.683810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.683845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.684038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.684070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.684272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.684308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.684495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.684528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.684686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.684718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.684893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.684926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.685078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.685110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.685299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.685331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.685511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.685543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 
00:37:32.266 [2024-07-13 13:49:06.685741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.685774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.686036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.686069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.686227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.686259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.686448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.686479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.686659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.686692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.686905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.686937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.687112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.687145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.687315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.687347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.687508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.687540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.687722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.687754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 
00:37:32.266 [2024-07-13 13:49:06.687928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.687961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.688117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.688150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.688302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.266 [2024-07-13 13:49:06.688334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.266 qpair failed and we were unable to recover it. 00:37:32.266 [2024-07-13 13:49:06.688511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.688550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.688719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.688755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.688963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.688995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.689170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.689206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.689424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.689460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.689672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.689710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.689902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.689938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 
00:37:32.267 [2024-07-13 13:49:06.690113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.690144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.690351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.690383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.690533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.690565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.690742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.690774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.690947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.690979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.691192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.691224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.691430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.691463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.691650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.691682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.691870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.691902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.692095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.692127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 
00:37:32.267 [2024-07-13 13:49:06.692317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.692359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.692564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.692607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.692801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.692843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.693048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.693081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.693269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.693301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.693465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.693502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.693720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.693754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.693900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.693933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.694088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.694122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.694314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.694346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 
00:37:32.267 [2024-07-13 13:49:06.694516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.694548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.694730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.694767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.694945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.694977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.695115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.267 [2024-07-13 13:49:06.695147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.267 qpair failed and we were unable to recover it. 00:37:32.267 [2024-07-13 13:49:06.695314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.695346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.695511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.695543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.695742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.695774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.695930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.695963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.696131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.696163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.696350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.696382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 
00:37:32.268 [2024-07-13 13:49:06.696554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.696586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.696762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.696795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.696949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.696982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.697149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.697181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.697351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.697383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.697559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.697592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.697766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.697798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.697959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.697993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.698203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.698235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.698400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.698442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 
00:37:32.268 [2024-07-13 13:49:06.698640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.698671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.698879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.698931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.699135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.699167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.699344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.699377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.699552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.699585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.699769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.699801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.699981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.700014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.700180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.700212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.700421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.700478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.700647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.700690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 
00:37:32.268 [2024-07-13 13:49:06.700863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.700900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.701048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.701081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.701259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.701292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.268 [2024-07-13 13:49:06.701467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.268 [2024-07-13 13:49:06.701499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.268 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.701680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.701713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.701859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.701908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.702108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.702150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.702327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.702359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.702541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.702573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.702720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.702752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 
00:37:32.269 [2024-07-13 13:49:06.702899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.702932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.703132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.703168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.703353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.703385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.703554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.703586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.703763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.703795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.703959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.703992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.704192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.704226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.704398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.704430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.704603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.704635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.704840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.704877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 
00:37:32.269 [2024-07-13 13:49:06.705050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.705082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.705272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.705307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.705492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.705528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.705765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.705797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.705983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.706049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.706238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.706270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.706421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.706452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.706598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.706631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.706772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.706804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.706977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.707010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 
00:37:32.269 [2024-07-13 13:49:06.707171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.707203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.707395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.707432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.269 [2024-07-13 13:49:06.707595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.269 [2024-07-13 13:49:06.707627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.269 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.707799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.707831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.708019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.708051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.708256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.708288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.708463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.708496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.708713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.708745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.708928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.708963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.709121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.709153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 
00:37:32.270 [2024-07-13 13:49:06.709358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.709390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.709560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.709593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.709781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.709817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.710055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.710088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.710292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.710324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.710514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.710546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.710731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.710764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.710954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.710987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.711164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.711196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.711377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.711414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 
00:37:32.270 [2024-07-13 13:49:06.711630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.711662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.711846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.711891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.712065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.712101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.712290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.712322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.712499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.270 [2024-07-13 13:49:06.712531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.270 qpair failed and we were unable to recover it. 00:37:32.270 [2024-07-13 13:49:06.712710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.712747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.712950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.712983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.713160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.713191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.713337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.713369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.713551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.713584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 
00:37:32.271 [2024-07-13 13:49:06.713781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.713816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.714047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.714080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.714267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.714299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.714475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.714510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.714698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.714734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.714908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.714941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.715132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.715165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.715350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.715384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.715535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.715566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.715770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.715802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 
00:37:32.271 [2024-07-13 13:49:06.716004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.716037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.716194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.716226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.716422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.716454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.716663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.716698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.716887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.716925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.717085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.717118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.717316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.717367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.717562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.717597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.717823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.717858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.718084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.718117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 
00:37:32.271 [2024-07-13 13:49:06.718289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.718322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.718466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.718499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.718671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.718702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.718924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.718957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.719123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.719155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.719298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.719329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.719525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.719557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.271 qpair failed and we were unable to recover it. 00:37:32.271 [2024-07-13 13:49:06.719757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.271 [2024-07-13 13:49:06.719794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.719984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.720026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.720170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.720202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 
00:37:32.272 [2024-07-13 13:49:06.720407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.720439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.720642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.720678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.720851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.720889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.721037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.721070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.721266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.721297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.721473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.721505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.721647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.721678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.721814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.721846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.722064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.722096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.722272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.722304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 
00:37:32.272 [2024-07-13 13:49:06.722450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.722481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.722680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.722711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.722883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.722916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.723071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.723104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.723303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.723335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.723501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.723533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.723705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.723737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.723919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.723951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.724135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.724168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.724341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.724374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 
00:37:32.272 [2024-07-13 13:49:06.724572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.724604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.724784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.724817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.724977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.725010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.725227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.725276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.725463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.725518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.725719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.725771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.725947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.725982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.726140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.726174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.726417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.726468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.272 qpair failed and we were unable to recover it. 00:37:32.272 [2024-07-13 13:49:06.726767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.272 [2024-07-13 13:49:06.726830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 
00:37:32.273 [2024-07-13 13:49:06.727023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.727056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.727262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.727312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.727568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.727624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.727800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.727833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.728019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.728052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.728262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.728297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.728500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.728551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.728733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.728768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.729005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.729055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.729277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.729332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 
00:37:32.273 [2024-07-13 13:49:06.729528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.729591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.729752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.729790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.729982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.730034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.730237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.730288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.730561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.730618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.730792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.730829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.731059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.731111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.731350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.731400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.731612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.731667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.731899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.731950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 
00:37:32.273 [2024-07-13 13:49:06.732159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.732215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.732401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.732453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.732769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.732827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.733035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.733086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.733341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.733393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.733633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.733695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.733881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.733915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.734121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.734155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.273 [2024-07-13 13:49:06.734395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.273 [2024-07-13 13:49:06.734446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.273 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.734751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.734812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 
00:37:32.274 [2024-07-13 13:49:06.735003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.735037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.735257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.735307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.735515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.735568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.735746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.735778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.735955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.735990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.736187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.736237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.736438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.736490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.736665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.736697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.736924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.736973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.737184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.737234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 
00:37:32.274 [2024-07-13 13:49:06.737401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.737452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.737660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.737718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.737923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.737975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.738186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.738237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.738470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.738521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.738732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.738769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.738948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.739012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.739226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.739278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.739514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.739566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.739739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.739771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 
00:37:32.274 [2024-07-13 13:49:06.739969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.740022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.740206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.740274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.740536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.740587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.740789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.740828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.741076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.741130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.741312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.741363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.741588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.741622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.741807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.741839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.742048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.274 [2024-07-13 13:49:06.742086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.274 qpair failed and we were unable to recover it. 00:37:32.274 [2024-07-13 13:49:06.742307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.742357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 
00:37:32.275 [2024-07-13 13:49:06.742554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.742606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.742774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.742809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.743020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.743071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.743260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.743310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.743512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.743568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.743751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.743783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.743969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.744020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.744227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.744275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.744522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.744572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.744747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.744779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 
00:37:32.275 [2024-07-13 13:49:06.744990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.745041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.745211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.745260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.745492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.745543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.745718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.745750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.745945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.745998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.746205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.746238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.746469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.746520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.746720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.746753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.746978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.747029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.747223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.747275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 
00:37:32.275 [2024-07-13 13:49:06.747468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.747519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.747673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.747705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.747902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.275 [2024-07-13 13:49:06.747958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.275 qpair failed and we were unable to recover it. 00:37:32.275 [2024-07-13 13:49:06.748122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.748173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.748406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.748457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.748630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.748663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.748847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.748884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.749085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.749136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.749326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.749377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.749568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.749629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 
00:37:32.276 [2024-07-13 13:49:06.749840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.749877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.750074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.750129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.750307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.750357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.750526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.750576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.750751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.750783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.750980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.751028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.751227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.751277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.751504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.751555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.751727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.751759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.751923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.751975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 
00:37:32.276 [2024-07-13 13:49:06.752146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.752196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.752394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.752443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.752618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.752651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.752821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.752854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.753007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.753040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.753242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.753293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.753500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.753551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.753754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.753786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.753985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.754036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.754239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.754291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 
00:37:32.276 [2024-07-13 13:49:06.754461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.754515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.754687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.754719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.754915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.754951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.755157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.276 [2024-07-13 13:49:06.755206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.276 qpair failed and we were unable to recover it. 00:37:32.276 [2024-07-13 13:49:06.755430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.755480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.755646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.755677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.755860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.755901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.756123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.756174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.756353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.756403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.756627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.756677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 
00:37:32.277 [2024-07-13 13:49:06.756884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.756917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.757111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.757164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.757362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.757411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.757606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.757654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.757859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.757897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.758095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.758145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.758341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.758391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.758588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.758638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.758819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.758852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.759023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.759072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 
00:37:32.277 [2024-07-13 13:49:06.759247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.759297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.759528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.759582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.759735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.759768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.759990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.760041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.760225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.760258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.760485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.760535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.760691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.760723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.760941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.760992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.761153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.761204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.761393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.761443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 
00:37:32.277 [2024-07-13 13:49:06.761646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.761684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.761884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.761916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.762106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.277 [2024-07-13 13:49:06.762154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.277 qpair failed and we were unable to recover it. 00:37:32.277 [2024-07-13 13:49:06.762355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.762404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.762569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.762605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.762802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.762836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.763050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.763101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.763307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.763356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.763581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.763632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.763814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.763848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 
00:37:32.278 [2024-07-13 13:49:06.764056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.764106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.764269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.764320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.764554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.764605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.764807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.764848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.765081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.765132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.765326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.765377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.765604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.765655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.765861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.765901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.766127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.766177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.766366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.766416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 
00:37:32.278 [2024-07-13 13:49:06.766617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.766667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.766847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.766894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.767097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.767147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.767379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.767429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.767624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.767673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.767852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.767892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.768093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.768126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.768312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.768361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.768558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.768608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.768749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.768780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 
00:37:32.278 [2024-07-13 13:49:06.768955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.768987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.769177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.769231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.769438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.769486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.769671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.769704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.769908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.278 [2024-07-13 13:49:06.769959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.278 qpair failed and we were unable to recover it. 00:37:32.278 [2024-07-13 13:49:06.770123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.770175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.770375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.770424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.770607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.770639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.770812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.770843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.771040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.771089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 
00:37:32.279 [2024-07-13 13:49:06.771270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.771320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.771508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.771557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.771728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.771760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.771956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.772006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.772207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.772257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.772465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.772515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.772692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.772724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.772880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.772913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.773095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.773127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.773298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.773347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 
00:37:32.279 [2024-07-13 13:49:06.773520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.773552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.773726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.773759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.773930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.773982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.774123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.774156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.774364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.774416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.774589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.774621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.774802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.774836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.775070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.775121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.775318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.775368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.775570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.775620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 
00:37:32.279 [2024-07-13 13:49:06.775799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.775831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.776038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.776087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.776312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.776362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.776560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.776610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.776760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.776793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.777000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.279 [2024-07-13 13:49:06.777050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.279 qpair failed and we were unable to recover it. 00:37:32.279 [2024-07-13 13:49:06.777239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.777291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.777483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.777531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.777716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.777749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.777977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.778028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 
00:37:32.280 [2024-07-13 13:49:06.778217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.778270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.778435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.778491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.778674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.778707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.778879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.778911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.779139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.779187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.779388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.779438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.779620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.779653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.779821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.779877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.780074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.780124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.780323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.780375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 
00:37:32.280 [2024-07-13 13:49:06.780580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.780611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.780785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.780816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.781051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.781101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.781246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.781278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.781450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.781500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.781676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.781709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.781893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.781942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.782148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.782181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.782380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.782429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.782572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.782604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 
00:37:32.280 [2024-07-13 13:49:06.782779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.782810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.783008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.783058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.783231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.783282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.783510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.783559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.783701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.280 [2024-07-13 13:49:06.783734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.280 qpair failed and we were unable to recover it. 00:37:32.280 [2024-07-13 13:49:06.783929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.783981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.784147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.784196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.784430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.784481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.784658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.784690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.784912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.784947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 
00:37:32.281 [2024-07-13 13:49:06.785145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.785176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.785409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.785459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.785634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.785665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.785861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.785924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.786155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.786206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.786392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.786441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.786611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.786643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.786841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.786882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.787055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.787106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.787309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.787358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 
00:37:32.281 [2024-07-13 13:49:06.787530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.787579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.787782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.787818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.788058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.788110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.788292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.788341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.788517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.788566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.788746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.788777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.788959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.788993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.789199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.789250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.281 qpair failed and we were unable to recover it. 00:37:32.281 [2024-07-13 13:49:06.789449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.281 [2024-07-13 13:49:06.789499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.789669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.789701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 
00:37:32.282 [2024-07-13 13:49:06.789909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.789943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.790097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.790146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.790332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.790365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.790567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.790599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.790805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.790837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.791049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.791100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.791302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.791351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.791580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.791617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.791825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.791858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.792062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.792112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 
00:37:32.282 [2024-07-13 13:49:06.792296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.792330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.792553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.792604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.792757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.792791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.793004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.793056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.793251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.793302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.793491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.793541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.793751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.793783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.793998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.794032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.794281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.794334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.794532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.794570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 
00:37:32.282 [2024-07-13 13:49:06.794771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.794803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.794955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.794989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.795152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.795187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.795349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.795385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.795629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.795664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.795882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.795933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.796080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.796112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.796429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.796488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.796702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.796737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.796946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.796978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 
00:37:32.282 [2024-07-13 13:49:06.797126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.797158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.797375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.282 [2024-07-13 13:49:06.797416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.282 qpair failed and we were unable to recover it. 00:37:32.282 [2024-07-13 13:49:06.797615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.797650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.797846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.797885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.798040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.798071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.798269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.798304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.798606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.798674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.798895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.798945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.799128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.799178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.799444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.799500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 
00:37:32.283 [2024-07-13 13:49:06.799717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.799752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.799935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.799967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.800167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.800199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.800392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.800427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.800578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.800613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.800860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.800918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.801072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.801104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.801257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.801307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.801496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.801532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.801745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.801780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 
00:37:32.283 [2024-07-13 13:49:06.801976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.802009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.802156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.802188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.802375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.802410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.802631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.802666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.802876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.802909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.803087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.803118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.803314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.803351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.803689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.803754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.803986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.804020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.804177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.804209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 
00:37:32.283 [2024-07-13 13:49:06.804517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.804574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.804788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.804824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.805024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.805056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.805242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.805289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.805504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.805557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.805777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.805811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.283 [2024-07-13 13:49:06.806033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.283 [2024-07-13 13:49:06.806077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.283 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.806266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.806316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.806475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.806524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.806827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.806884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 
00:37:32.284 [2024-07-13 13:49:06.807054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.807086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.807282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.807332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.807629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.807687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.807878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.807928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.808106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.808138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.808419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.808476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.808692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.808727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.808932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.808965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.809160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.809207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.809405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.809459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 
00:37:32.284 [2024-07-13 13:49:06.809691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.809742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.809923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.809956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.810157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.810209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.810387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.810438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.810745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.810799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.811004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.811055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.811244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.811294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.811536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.811590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.811787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.811819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.812001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.812050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 
00:37:32.284 [2024-07-13 13:49:06.812277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.812342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.812544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.812582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.812750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.812785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.813006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.813039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.813241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.813276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.813463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.813498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.813690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.813725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.813891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.813940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.814088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.284 [2024-07-13 13:49:06.814124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.284 qpair failed and we were unable to recover it. 00:37:32.284 [2024-07-13 13:49:06.814414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.814472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 
00:37:32.285 [2024-07-13 13:49:06.814664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.814700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.814878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.814910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.815085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.815117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.815291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.815328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.815520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.815556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.815731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.815762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.815934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.815967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.816108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.816159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.816361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.816456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.816657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.816692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 
00:37:32.285 [2024-07-13 13:49:06.816893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.816943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.817115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.817147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.817355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.817390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.817544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.817579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.817776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.817808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.817967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.818000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.818190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.818225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.818437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.818491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.818685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.818721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.818930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.818963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 
00:37:32.285 [2024-07-13 13:49:06.819179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.819226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.819456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.819509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.819723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.819774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.819950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.819984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.820209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.820260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.820469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.820518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.820773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.820839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.821027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.285 [2024-07-13 13:49:06.821060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.285 qpair failed and we were unable to recover it. 00:37:32.285 [2024-07-13 13:49:06.821292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.286 [2024-07-13 13:49:06.821341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.286 qpair failed and we were unable to recover it. 00:37:32.286 [2024-07-13 13:49:06.821566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.286 [2024-07-13 13:49:06.821618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.286 qpair failed and we were unable to recover it. 
00:37:32.286 [2024-07-13 13:49:06.821790 .. 13:49:06.835112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.286 nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:32.286 (the two errors above repeat for 56 consecutive connection attempts in this interval; each attempt ends with "qpair failed and we were unable to recover it.")
00:37:32.288 [2024-07-13 13:49:06.835359 .. 13:49:06.870386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.288 nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:32.288 (the two errors above repeat for 154 consecutive connection attempts in this interval; each attempt ends with "qpair failed and we were unable to recover it.")
00:37:32.293 [2024-07-13 13:49:06.870602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.870637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.870818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.870854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.871042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.871073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.871239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.871274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.871445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.871481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.871655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.871687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.871885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.871917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.872109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.872144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.872338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.872380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.872603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.872639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 
00:37:32.293 [2024-07-13 13:49:06.872832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.872875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.873067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.873098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.873277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.873312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.873498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.873533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.873752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.873783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.873934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.873966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.874145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.874176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.874323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.874355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.874495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.874526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.874743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.874778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 
00:37:32.293 [2024-07-13 13:49:06.874957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.874990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.875179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.875215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.875405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.875440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.875662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.875693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.875895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.875931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.876117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.876149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.293 qpair failed and we were unable to recover it. 00:37:32.293 [2024-07-13 13:49:06.876357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.293 [2024-07-13 13:49:06.876389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.876580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.876615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.876800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.876835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.877028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.877060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 
00:37:32.294 [2024-07-13 13:49:06.877271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.877307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.877493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.877528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.877723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.877755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.877949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.877984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.878138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.878173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.878371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.878403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.878603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.878639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.878828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.878864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.879049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.879087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.879248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.879283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 
00:37:32.294 [2024-07-13 13:49:06.879449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.879484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.879688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.879721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.879897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.879934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.880121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.880157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.880377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.880408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.880583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.880618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.880805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.880840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.881016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.881049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.881224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.881256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.881435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.881471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 
00:37:32.294 [2024-07-13 13:49:06.881688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.881720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.881890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.881926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.882092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.882127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.882322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.882354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.882546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.882581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.882774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.294 [2024-07-13 13:49:06.882809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.294 qpair failed and we were unable to recover it. 00:37:32.294 [2024-07-13 13:49:06.882987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.883019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.883186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.883218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.883420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.883456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.883652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.883683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 
00:37:32.295 [2024-07-13 13:49:06.883909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.883942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.884146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.884194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.884391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.884422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.884601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.884633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.884807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.884839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.885059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.885107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.885344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.885398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.885550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.885584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.885742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.885777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.885927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.885960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 
00:37:32.295 [2024-07-13 13:49:06.886159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.886210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.886569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.886622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.886779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.886813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.887007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.887059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.887278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.887329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.887524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.887557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.887740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.887774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.887978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.888029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.888222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.888278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.888468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.888519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 
00:37:32.295 [2024-07-13 13:49:06.888705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.888737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.888931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.888982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.889179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.889214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.889428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.889479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.889634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.889665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.889816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.889848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.890024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.295 [2024-07-13 13:49:06.890074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.295 qpair failed and we were unable to recover it. 00:37:32.295 [2024-07-13 13:49:06.890285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.890339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.890545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.890577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.890756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.890789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 
00:37:32.296 [2024-07-13 13:49:06.890980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.891031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.891230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.891282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.891536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.891587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.891787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.891819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.892052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.892086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.892287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.892343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.892583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.892634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.892787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.892820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.893016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.893067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.893253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.893303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 
00:37:32.296 [2024-07-13 13:49:06.893525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.893576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.893759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.893792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.893990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.894041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.894244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.894295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.894668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.894721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.894963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.895002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.895166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.895202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.895376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.895424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.895656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.895692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.895938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.895971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 
00:37:32.296 [2024-07-13 13:49:06.896174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.896209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.896376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.896423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.896643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.896679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.896914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.896949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.897152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.897203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.897418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.897469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.897766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.897798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.897963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.897996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.898198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.898262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.898493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.898561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 
00:37:32.296 [2024-07-13 13:49:06.898706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.296 [2024-07-13 13:49:06.898739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.296 qpair failed and we were unable to recover it. 00:37:32.296 [2024-07-13 13:49:06.898951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.899004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.899274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.899325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.899506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.899544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.899716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.899751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.899976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.900008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.900172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.900207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.900366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.900402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.900588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.900623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.900818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.900853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 
00:37:32.297 [2024-07-13 13:49:06.901039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.901071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.901273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.901309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.901481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.901516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.901705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.901740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.901986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.902033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.902213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.902265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.902440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.902489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.902694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.902744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.902926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.902960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.903156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.903206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 
00:37:32.297 [2024-07-13 13:49:06.903535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.903589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.903772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.903805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.903980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.904013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.904214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.904251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.904436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.904471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.904673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.904709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.904877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.904909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.905100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.905132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.905345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.297 [2024-07-13 13:49:06.905380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.297 qpair failed and we were unable to recover it. 00:37:32.297 [2024-07-13 13:49:06.905589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.905624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 
00:37:32.298 [2024-07-13 13:49:06.905814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.905849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.906035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.906068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.906398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.906468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.906706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.906758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.906917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.906951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.907145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.907196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.907425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.907475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.907642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.907694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.907852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.907901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.908104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.908156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 
00:37:32.298 [2024-07-13 13:49:06.908378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.908429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.908650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.908683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.908884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.908918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.909139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.909188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.909359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.909409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.909609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.909659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.909825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.909858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.910063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.910113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.910278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.910330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.910526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.910576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 
00:37:32.298 [2024-07-13 13:49:06.910788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.910820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.910989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.298 [2024-07-13 13:49:06.911039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.298 qpair failed and we were unable to recover it. 00:37:32.298 [2024-07-13 13:49:06.911270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.911320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.911702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.911771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.912034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.912072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.912269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.912305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.912623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.912686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.912917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.912949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.913099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.913130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.913303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.913338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 
00:37:32.299 [2024-07-13 13:49:06.913526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.913561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.913726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.913762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.913961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.913995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.914195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.914247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.914472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.914522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.914848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.914921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.915120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.915151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.915329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.915364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.915530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.915565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.915728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.915764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 
00:37:32.299 [2024-07-13 13:49:06.915942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.915974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.916145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.916176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.916382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.916418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.916618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.916653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.916855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.916895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.917097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.917146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.917370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.917405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.917644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.917699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.917905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.917943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.918159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.918195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 
00:37:32.299 [2024-07-13 13:49:06.918357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.918389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.918579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.918614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.918803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.918838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.919011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.919043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.919245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.919280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.919483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.299 [2024-07-13 13:49:06.919518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.299 qpair failed and we were unable to recover it. 00:37:32.299 [2024-07-13 13:49:06.919732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.919768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.919967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.919999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.920194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.920230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.920422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.920454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 
00:37:32.300 [2024-07-13 13:49:06.920646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.920681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.920863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.920921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.921104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.921136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.921359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.921394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.921597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.921635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.921880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.921931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.922111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.922158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.922315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.922362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.922654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.922708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.922911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.922943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 
00:37:32.300 [2024-07-13 13:49:06.923110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.923161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.923377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.923408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.923616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.923651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.923814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.923849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.924025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.924056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.924272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.924307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.924473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.924508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.924720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.924754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.924959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.924991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.925170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.925202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 
00:37:32.300 [2024-07-13 13:49:06.925376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.925407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.925609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.925645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.925862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.925919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.926073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.926106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.926299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.926335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.926559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.926595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.926789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.926824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.927010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.927042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.300 qpair failed and we were unable to recover it. 00:37:32.300 [2024-07-13 13:49:06.927279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.300 [2024-07-13 13:49:06.927310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.927660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.927715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 
00:37:32.301 [2024-07-13 13:49:06.927922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.927955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.928165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.928203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.928379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.928411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.928563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.928594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.928739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.928771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.928969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.929001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.929165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.929200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.929416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.929451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.929623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.929655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.929818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.929852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 
00:37:32.301 [2024-07-13 13:49:06.930031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.930066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.930287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.930319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.930493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.930528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.930720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.930755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.930951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.930984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.931171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.931206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.931431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.931467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.931699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.931731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.931880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.931912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.932052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.932084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 
00:37:32.301 [2024-07-13 13:49:06.932260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.932292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.932431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.932462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.932637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.932669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.932816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.932848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.933035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.933071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.933246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.933283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.933425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.933457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.933626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.933658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.933803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.301 [2024-07-13 13:49:06.933835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.301 qpair failed and we were unable to recover it. 00:37:32.301 [2024-07-13 13:49:06.934015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.934047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 
00:37:32.302 [2024-07-13 13:49:06.934212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.934248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.934432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.934468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.934632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.934664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.934861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.934906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.935066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.935102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.935299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.935330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.935526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.935558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.935806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.935838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.936056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.936110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.936340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.936376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 
00:37:32.302 [2024-07-13 13:49:06.936617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.936677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.936864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.936927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.937106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.937138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.937324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.937359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.937683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.937758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.937956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.937988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.938167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.938198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.938367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.938403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.938594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.938629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.938796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.938832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 
00:37:32.302 [2024-07-13 13:49:06.939014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.939062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.939243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.939298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.939523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.939559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.939736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.939771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.939920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.939953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.940169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.940202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.940377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.940410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.940617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.940671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.940826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.940861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.302 qpair failed and we were unable to recover it. 00:37:32.302 [2024-07-13 13:49:06.941084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.302 [2024-07-13 13:49:06.941120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 
00:37:32.303 [2024-07-13 13:49:06.941281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.941316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.941482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.941518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.941708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.941744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.941940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.941972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.942163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.942199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.942382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.942425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.942607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.942643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.942792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.942828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.943002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.943035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.943209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.943241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 
00:37:32.303 [2024-07-13 13:49:06.943427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.943462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.943686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.943721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.943889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.943922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.944091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.944141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.944412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.944446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.944640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.944675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.944836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.944880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.945062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.945094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.945238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.945269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.945448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.945480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 
00:37:32.303 [2024-07-13 13:49:06.945702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.945737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.945888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.945940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.946113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.946144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.946366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.946402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.303 [2024-07-13 13:49:06.946590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.303 [2024-07-13 13:49:06.946625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.303 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.946819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.946854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.947028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.947060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.947230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.947262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.947432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.947464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.947625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.947662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 
00:37:32.304 [2024-07-13 13:49:06.947817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.947853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.948056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.948088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.948283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.948331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.948507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.948563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.948727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.948763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.948968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.949003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.949213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.949266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.949499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.949539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.949756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.949789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.949967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.950000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 
00:37:32.304 [2024-07-13 13:49:06.950141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.950173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.950342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.950377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.950591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.950659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.950851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.950910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.951062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.951094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.951472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.951512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.951707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.951742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.951923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.951956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.952102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.952151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.952313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.952349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 
00:37:32.304 [2024-07-13 13:49:06.952584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.952642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.952811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.952843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.953010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.953043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.304 [2024-07-13 13:49:06.953219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.304 [2024-07-13 13:49:06.953251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.304 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.953438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.953533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.953734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.953769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.953974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.954006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.954155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.954187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.954363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.954395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.954577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.954610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 
00:37:32.305 [2024-07-13 13:49:06.954822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.954858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.955065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.955098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.955259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.955295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.955588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.955658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.955824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.955861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.956044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.956076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.956226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.956258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.956432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.956468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.956659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.956695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.956916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.956948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 
00:37:32.305 [2024-07-13 13:49:06.957093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.957125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.957296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.957331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.957591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.957626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.957841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.957884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.958050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.958081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.958260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.958292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.958492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.958527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.958689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.958725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.958936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.958969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.959116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.959147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 
00:37:32.305 [2024-07-13 13:49:06.959293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.959324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.959559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.959596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.959778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.959813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.960021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.960054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.960251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.960283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.960450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.305 [2024-07-13 13:49:06.960487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.305 qpair failed and we were unable to recover it. 00:37:32.305 [2024-07-13 13:49:06.960663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.960694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.960843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.960884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.961081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.961113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.961300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.961335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 
00:37:32.306 [2024-07-13 13:49:06.961644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.961710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.961947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.961980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.962155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.962187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.962378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.962413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.962606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.962641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.962814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.962849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.963033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.963067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.963276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.963307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.963479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.963512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.963719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.963755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 
00:37:32.306 [2024-07-13 13:49:06.963952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.963984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.964165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.964197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.964378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.964411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.964582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.964614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.964788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.964820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.964981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.965013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.965162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.965194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.965361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.965393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.965579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.965611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.965750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.965781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 
00:37:32.306 [2024-07-13 13:49:06.965955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.965987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.966163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.966195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.966362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.966394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.966559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.966591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.966786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.966818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.966965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.966999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.967147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.967179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.967322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.967354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.306 qpair failed and we were unable to recover it. 00:37:32.306 [2024-07-13 13:49:06.967539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.306 [2024-07-13 13:49:06.967574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.967771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.967803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 
00:37:32.307 [2024-07-13 13:49:06.967999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.968032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.968178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.968209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.968407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.968439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.968617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.968649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.968852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.968896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.969115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.969151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.969364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.969395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.969569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.969611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.969765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.969796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.969970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.970002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 
00:37:32.307 [2024-07-13 13:49:06.970245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.970277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.970474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.970505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.970680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.970711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.970904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.970940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.971106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.971141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.971359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.971390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.971582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.971614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.971823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.971859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.972082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.972114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.972296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.972328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 
00:37:32.307 [2024-07-13 13:49:06.972493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.972527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.972721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.972753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.972946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.972983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.973170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.973205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.973401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.973433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.973629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.973664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.973862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.973904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.974074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.974106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.974333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.974368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.974582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.974617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 
00:37:32.307 [2024-07-13 13:49:06.974786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.974818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.974996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.975028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.975222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.307 [2024-07-13 13:49:06.975257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.307 qpair failed and we were unable to recover it. 00:37:32.307 [2024-07-13 13:49:06.975460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.975492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.975659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.975695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.975891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.975927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.976104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.976136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.976298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.976333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.976525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.976560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.976722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.976754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 
00:37:32.308 [2024-07-13 13:49:06.976942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.976978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.977163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.977196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.977372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.977405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.977595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.977630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.977847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.977890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.978058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.978096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.978304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.978340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.978505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.978541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.978738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.978769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.978940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.978976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 
00:37:32.308 [2024-07-13 13:49:06.979211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.979243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.979413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.979444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.979643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.979678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.979833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.979875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.980077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.980109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.980320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.980356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.980573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.980605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.980752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.980784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.980976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.981012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.981205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.981240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 
00:37:32.308 [2024-07-13 13:49:06.981412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.981445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.981645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.981680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.981838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.981881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.982062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.982094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.982293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.982330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.982521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.982556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.982761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.982793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.982944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.308 [2024-07-13 13:49:06.982977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.308 qpair failed and we were unable to recover it. 00:37:32.308 [2024-07-13 13:49:06.983135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.983187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.983406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.983439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 
00:37:32.309 [2024-07-13 13:49:06.983605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.983653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.983854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.983925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.984121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.984154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.984327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.984359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.984557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.984592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.984789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.984822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.985011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.985044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.985229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.985261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.985450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.985484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.985681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.985717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 
00:37:32.309 [2024-07-13 13:49:06.985894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.985947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.986182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.986233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.986469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.986523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.986706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.986757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.986957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.986992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.987149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.987187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.987386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.987418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.987606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.987642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.987805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.987851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 00:37:32.309 [2024-07-13 13:49:06.988059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.309 [2024-07-13 13:49:06.988091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.309 qpair failed and we were unable to recover it. 
00:37:32.587 [2024-07-13 13:49:06.988301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.988336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.988524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.988575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.988793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.988831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.989057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.989095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.989284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.989341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.989523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.989558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.989738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.989771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.989997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.990032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.990213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.990275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.990626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.990681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 
00:37:32.587 [2024-07-13 13:49:06.990858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.990898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.991119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.991153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.991346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.991396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.991598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.991649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.991821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.991853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.992068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.992119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.992284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.992336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.992540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.992590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.992771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.992803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.992987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.993038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 
00:37:32.587 [2024-07-13 13:49:06.993235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.993287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.993510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.993560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.587 [2024-07-13 13:49:06.993775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.587 [2024-07-13 13:49:06.993830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.587 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.994053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.994099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.994356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.994404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.994647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.994699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.994952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.995004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.995214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.995272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.995573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.995626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.995918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.995975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 
00:37:32.588 [2024-07-13 13:49:06.996168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.996233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.996657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.996708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.996954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.997001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.997211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.997258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.997515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.997579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.997770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.997810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.997976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.998011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.998240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.998304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.998537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.998602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.998815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.998861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 
00:37:32.588 [2024-07-13 13:49:06.999068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.999117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.999376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.999441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:06.999836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:06.999914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.000153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.000201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.000411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.000475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.000729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.000792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.001020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.001068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.001334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.001399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.001655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.001718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.001994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.002046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 
00:37:32.588 [2024-07-13 13:49:07.002227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.002278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.002484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.002520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.002724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.002761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.002941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.002974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.003152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.003186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.003399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.003436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.003652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.003687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.003858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.003916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.004096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.004130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.004287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.004319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 
00:37:32.588 [2024-07-13 13:49:07.004525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.004559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.004779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.004816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.588 qpair failed and we were unable to recover it. 00:37:32.588 [2024-07-13 13:49:07.005010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.588 [2024-07-13 13:49:07.005058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.005250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.005293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.005483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.005533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.005731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.005767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.005971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.006003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.006181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.006213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.006538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.006599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.006766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.006799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 
00:37:32.589 [2024-07-13 13:49:07.006950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.006982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.007139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.007172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.007379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.007445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.007734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.007772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.008010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.008042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.008269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.008310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.008620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.008679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.008881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.008914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.009084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.009117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.009499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.009552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 
00:37:32.589 [2024-07-13 13:49:07.009807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.009842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.010027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.010058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.010211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.010244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.010436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.010472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.010638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.010674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.010854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.010931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.011135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.011182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.011396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.011448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.011652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.011702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.011924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.011958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 
00:37:32.589 [2024-07-13 13:49:07.012131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.012163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.012447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.012480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.012835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.012902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.013082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.013114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.013333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.013367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.013521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.013553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.013761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.013794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.013964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.013996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.014209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.014262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.014648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.014710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 
00:37:32.589 [2024-07-13 13:49:07.014887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.014921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.015098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.015130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.589 qpair failed and we were unable to recover it. 00:37:32.589 [2024-07-13 13:49:07.015341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.589 [2024-07-13 13:49:07.015378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.015585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.015622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.015842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.015887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.016062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.016095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.016265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.016298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.016523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.016559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.016778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.016814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.017020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.017054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 
00:37:32.590 [2024-07-13 13:49:07.017282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.017335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.017669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.017736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.017937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.017970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.018160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.018196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.018518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.018590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.018864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.018927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.019075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.019107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.019315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.019351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.019673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.019731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.019939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.019971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 
00:37:32.590 [2024-07-13 13:49:07.020138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.020169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.020464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.020537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.020920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.020956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.021139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.021173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.021469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.021536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.021843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.021939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.022151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.022184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.022543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.022599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.022820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.022855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.023076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.023109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 
00:37:32.590 [2024-07-13 13:49:07.023305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.023355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.023565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.023614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.023797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.023829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.024042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.024074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.024267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.024318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.024521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.024571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.024757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.024789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.025003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.025055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.025240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.025291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.025602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.025658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 
00:37:32.590 [2024-07-13 13:49:07.025814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.025846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.026049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.026098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.590 qpair failed and we were unable to recover it. 00:37:32.590 [2024-07-13 13:49:07.026342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.590 [2024-07-13 13:49:07.026394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.026722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.026777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.026958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.026991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.027201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.027237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.027430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.027466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.027659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.027695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.027929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.027963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.028156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.028207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 
00:37:32.591 [2024-07-13 13:49:07.028407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.028456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.028628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.028680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.028855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.028895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.029065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.029108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.029320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.029354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.029610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.029669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.029891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.029924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.030125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.030176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.030403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.030454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.030619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.030668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 
00:37:32.591 [2024-07-13 13:49:07.030849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.030888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.031068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.031100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.031328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.031379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.031668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.031717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.031898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.031932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.032160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.032210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.032439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.032485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.032652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.032687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.032891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.032940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.033185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.033221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 
00:37:32.591 [2024-07-13 13:49:07.033413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.033449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.033640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.033679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.033927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.033962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.034196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.034247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.034480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.591 [2024-07-13 13:49:07.034531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.591 qpair failed and we were unable to recover it. 00:37:32.591 [2024-07-13 13:49:07.034699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.034732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.034884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.034916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.035104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.035154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.035363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.035413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.035597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.035630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 
00:37:32.592 [2024-07-13 13:49:07.035806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.035839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.036047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.036097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.036307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.036357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.036557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.036607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.036782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.036815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.037019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.037070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.037295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.037346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.037545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.037595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.037742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.037774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.037966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.038017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 
00:37:32.592 [2024-07-13 13:49:07.038168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.038200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.038391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.038442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.038618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.038650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.038854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.038926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.039129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.039168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.039371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.039414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.039765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.039823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.040019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.040055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.040222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.040258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.040423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.040458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 
00:37:32.592 [2024-07-13 13:49:07.040654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.040689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.040839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.040880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.041137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.041173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.041379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.041414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.041609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.041645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.041823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.041858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.042033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.042067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.042293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.042342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.042627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.042676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.042859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.042901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 
00:37:32.592 [2024-07-13 13:49:07.043086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.043120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.043362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.043399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.043597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.043633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.043803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.043835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.044007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.044038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.592 [2024-07-13 13:49:07.044245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.592 [2024-07-13 13:49:07.044281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.592 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.044497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.044534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.044859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.044933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.045119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.045159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.045350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.045385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 
00:37:32.593 [2024-07-13 13:49:07.045581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.045617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.045780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.045815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.046025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.046058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.046236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.046271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.046488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.046523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.046718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.046755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.046990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.047038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.047254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.047288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.047485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.047536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.047719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.047754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 
00:37:32.593 [2024-07-13 13:49:07.047916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.047951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.048178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.048228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.048502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.048536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.048698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.048731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.048934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.048987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.049205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.049242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.049447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.049483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.049682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.049717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.049917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.049950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.050123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.050174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 
00:37:32.593 [2024-07-13 13:49:07.050340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.050375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.050710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.050770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.050971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.051004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.051160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.051210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.051491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.051548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.051741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.051777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.051948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.051980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.052156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.052206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.052373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.052409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.052631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.052667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 
00:37:32.593 [2024-07-13 13:49:07.052899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.052932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.053075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.053108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.053341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.053377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.053696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.053767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.053980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.054012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.054171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.593 [2024-07-13 13:49:07.054204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.593 qpair failed and we were unable to recover it. 00:37:32.593 [2024-07-13 13:49:07.054382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.054414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.054612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.054647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.054817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.054852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.055034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.055066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 
00:37:32.594 [2024-07-13 13:49:07.055307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.055356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.055521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.055557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.055743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.055783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.056007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.056042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.056198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.056231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.056462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.056518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.056737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.056787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.057012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.057045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.057192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.057224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.057421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.057456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 
00:37:32.594 [2024-07-13 13:49:07.057626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.057662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.057916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.057951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.058105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.058138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.058315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.058349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.058622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.058658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.058849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.058894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.059091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.059122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.059362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.059398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.059573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.059608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.059809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.059843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 
00:37:32.594 [2024-07-13 13:49:07.060045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.060078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.060276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.060312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.060479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.060513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.060755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.060789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.060972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.061003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.061195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.061229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.061419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.061454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.061649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.061684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.061842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.061884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.062129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.062176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 
00:37:32.594 [2024-07-13 13:49:07.062380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.062432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.062670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.062721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.062910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.062944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.063143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.063194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.063394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.063444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.063815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.063874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.064082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.594 [2024-07-13 13:49:07.064115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.594 qpair failed and we were unable to recover it. 00:37:32.594 [2024-07-13 13:49:07.064343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.064393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.064628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.064678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.064889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.064923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 
00:37:32.595 [2024-07-13 13:49:07.065115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.065172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.065388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.065426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.065621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.065663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.065824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.065858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.066047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.066079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.066254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.066289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.066486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.066520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.066767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.066802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.067037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.067068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.067228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.067264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 
00:37:32.595 [2024-07-13 13:49:07.067629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.067684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.067916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.067948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.068128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.068159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.068456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.068520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.068720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.068755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.068937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.068970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.069169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.069216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.069448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.069499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.069713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.069763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.069964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.070009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 
00:37:32.595 [2024-07-13 13:49:07.070187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.070239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.070417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.070466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.070663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.070713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.070928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.070979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.071143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.071194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.071545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.071607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.071813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.071845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.072031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.072064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.072270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.072305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.072500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.072535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 
00:37:32.595 [2024-07-13 13:49:07.072735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.072771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.072986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.073020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.073191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.073242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.073437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.073487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.073673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.073706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.073895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.073948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.074148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.595 [2024-07-13 13:49:07.074197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.595 qpair failed and we were unable to recover it. 00:37:32.595 [2024-07-13 13:49:07.074511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.074567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.074770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.074802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.075021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.075072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 
00:37:32.596 [2024-07-13 13:49:07.075314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.075351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.075545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.075581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.075760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.075835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.076051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.076103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.076300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.076351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.076584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.076634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.076806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.076839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.077060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.077111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.077287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.077339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.077660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.077717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 
00:37:32.596 [2024-07-13 13:49:07.077940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.077990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.078196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.078245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.078471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.078521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.078698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.078730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.078926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.078978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.079188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.079237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.079462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.079512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.079684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.079717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.079919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.079971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.080150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.080184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 
00:37:32.596 [2024-07-13 13:49:07.080411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.080461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.080639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.080672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.080846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.080884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.081116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.081167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.081429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.081480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.081706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.081744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.081947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.081981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.082161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.082196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.082366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.082417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.082617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.082652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 
00:37:32.596 [2024-07-13 13:49:07.082879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.082914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.596 [2024-07-13 13:49:07.083070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.596 [2024-07-13 13:49:07.083103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.596 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.083299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.083348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.083571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.083622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.083801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.083833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.083998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.084031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.084209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.084246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.084442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.084477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.084670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.084705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.084898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.084931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 
00:37:32.597 [2024-07-13 13:49:07.085106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.085153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.085342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.085376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.085609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.085648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.085863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.085926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.086124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.086173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.086474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.086531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.086750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.086785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.086983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.087016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.087188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.087219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.087384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.087418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 
00:37:32.597 [2024-07-13 13:49:07.087633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.087668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.087835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.087876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.088067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.088100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.088303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.088338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.088631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.088667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.088888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.088939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.089087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.089119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.089356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.089390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.089611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.089647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.089809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.089844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 
00:37:32.597 [2024-07-13 13:49:07.090049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.090080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.090316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.090351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.090570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.090605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.090791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.090826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.091060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.091093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.091293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.091328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.091561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.091618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.091780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.091816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.092044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.092077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.092268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.092315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 
00:37:32.597 [2024-07-13 13:49:07.092527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.092580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.092803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.597 [2024-07-13 13:49:07.092837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.597 qpair failed and we were unable to recover it. 00:37:32.597 [2024-07-13 13:49:07.093032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.093065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.093271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.093308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.093628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.093684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.093893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.093957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.094139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.094190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.094367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.094403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.094719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.094782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.094966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.094999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 
00:37:32.598 [2024-07-13 13:49:07.095195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.095230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.095495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.095530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.095730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.095770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.095962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.095995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.096170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.096201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.096396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.096432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.096646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.096681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.096880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.096913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.097069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.097100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.097251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.097283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 
00:37:32.598 [2024-07-13 13:49:07.097513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.097568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.097768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.097806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.097994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.098026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.098211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.098258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.098417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.098453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.098639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.098691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.098876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.098910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.099104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.099136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.099355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.099389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.099587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.099637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 
00:37:32.598 [2024-07-13 13:49:07.099810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.099843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.100058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.100092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.100298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.100347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.100532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.100565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.100741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.100773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.100974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.101007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.101173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.101224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.101426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.101478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.101655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.101687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.101839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.101881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 
00:37:32.598 [2024-07-13 13:49:07.102055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.102105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.102326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.598 [2024-07-13 13:49:07.102378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.598 qpair failed and we were unable to recover it. 00:37:32.598 [2024-07-13 13:49:07.102610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.102647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.102853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.102894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.103098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.103134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.103341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.103377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.103548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.103583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.103766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.103801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.103971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.104003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.104202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.104252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 
00:37:32.599 [2024-07-13 13:49:07.104629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.104686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.104888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.104937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.105120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.105176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.105535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.105599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.105815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.105847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.106035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.106068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.106266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.106298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.106498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.106534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.106731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.106766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.106944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.106978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 
00:37:32.599 [2024-07-13 13:49:07.107159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.107191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.107332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.107383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.107554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.107589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.107767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.107799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.107971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.108004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.108350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.108406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.108604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.108639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.108818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.108855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.109054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.109091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.109286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.109322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 
00:37:32.599 [2024-07-13 13:49:07.109517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.109552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.109774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.109822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.110025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.110072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.110305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.110356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.110710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.110769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.110978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.111013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.111188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.111238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.111505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.111563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.111738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.111771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.111961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.112012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 
00:37:32.599 [2024-07-13 13:49:07.112213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.112262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.112466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.112516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.112718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.599 [2024-07-13 13:49:07.112750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.599 qpair failed and we were unable to recover it. 00:37:32.599 [2024-07-13 13:49:07.112924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.112976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.113148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.113199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.113392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.113442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.113642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.113674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.113850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.113888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.114094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.114143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.114335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.114385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 
00:37:32.600 [2024-07-13 13:49:07.114583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.114632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.114784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.114816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.115030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.115086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.115300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.115333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.115533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.115585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.115737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.115770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.115987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.116038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.116228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.116261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.116429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.116480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.116642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.116674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 
00:37:32.600 [2024-07-13 13:49:07.116879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.116911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.117105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.117155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.117392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.117441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.117619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.117672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.117819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.117852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.118029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.118079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.118280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.118330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.118546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.118597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.118772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.118804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.119008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.119059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 
00:37:32.600 [2024-07-13 13:49:07.119279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.119330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.119519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.119570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.119785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.119821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.120057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.120090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.120294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.120329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.120495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.120531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.120725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.120760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.600 qpair failed and we were unable to recover it. 00:37:32.600 [2024-07-13 13:49:07.120960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.600 [2024-07-13 13:49:07.120993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.121166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.121201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.121453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.121488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 
00:37:32.601 [2024-07-13 13:49:07.121643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.121678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.121880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.121930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.122109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.122140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.122338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.122373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.122592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.122628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.122822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.122857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.123064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.123096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.123300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.123336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.123531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.123566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.123722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.123757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 
00:37:32.601 [2024-07-13 13:49:07.123960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.123993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.124168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.124200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.124392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.124433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.124591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.124626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.124843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.124885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.125103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.125150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.125367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.125402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.125598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.125633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.125878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.125928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.126076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.126108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 
00:37:32.601 [2024-07-13 13:49:07.126279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.126314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.126490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.126525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.126741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.126776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.126998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.127031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.127238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.127273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.127459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.127494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.127684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.127719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.127945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.127977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.128181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.128218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.128377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.128413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 
00:37:32.601 [2024-07-13 13:49:07.128668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.128704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.128871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.128927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.129084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.129127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.129305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.129337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.129564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.129599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.129790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.129825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.130032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.130064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.130230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.601 [2024-07-13 13:49:07.130265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.601 qpair failed and we were unable to recover it. 00:37:32.601 [2024-07-13 13:49:07.130614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.130666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.130885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.130919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 
00:37:32.602 [2024-07-13 13:49:07.131067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.131099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.131298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.131333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.131663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.131727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.131909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.131941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.132091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.132123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.132342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.132373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.132589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.132645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.132858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.132903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.133100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.133141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.133312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.133344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 
00:37:32.602 [2024-07-13 13:49:07.133711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.133771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.133976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.134009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.134206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.134247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.134467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.134502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.134725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.134760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.134923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.134955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.135148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.135183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.135373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.135409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.135656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.135711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.135944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.135976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 
00:37:32.602 [2024-07-13 13:49:07.136153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.136188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.136401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.136436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.136623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.136658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.136851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.136890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.137094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.137129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.137314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.137349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.137649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.137707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.137918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.137950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.138154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.138191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.138407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.138443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 
00:37:32.602 [2024-07-13 13:49:07.138758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.138816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.139021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.139053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.139232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.139264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.139438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.139473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.139696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.139728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.139899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.139933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.140158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.140193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.140386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.140423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.602 [2024-07-13 13:49:07.140610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.602 [2024-07-13 13:49:07.140646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.602 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.140814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.140847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 
00:37:32.603 [2024-07-13 13:49:07.141050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.141086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.141302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.141338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.141705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.141741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.141964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.141997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.142154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.142186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.142378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.142413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.142602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.142637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.142835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.142874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.143112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.143148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.143316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.143351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 
00:37:32.603 [2024-07-13 13:49:07.143543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.143578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.143749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.143781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.143957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.143994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.144153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.144185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.144417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.144471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.144677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.144709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.144864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.144933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.145126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.145167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.145362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.145415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.145592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.145625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 
00:37:32.603 [2024-07-13 13:49:07.145798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.145834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.146067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.146115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.146312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.146350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.146524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.146557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.146713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.146746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.146939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.146971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.147213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.147246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.147416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.147448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.147621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.147657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 00:37:32.603 [2024-07-13 13:49:07.147826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.603 [2024-07-13 13:49:07.147863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.603 qpair failed and we were unable to recover it. 
00:37:32.603 [2024-07-13 13:49:07.148053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.603 [2024-07-13 13:49:07.148089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:32.603 qpair failed and we were unable to recover it.
00:37:32.603 [the same three-line error repeats back-to-back from 13:49:07.148 through 13:49:07.195, differing only in microsecond timestamps: every connect() attempt to 10.0.0.2 port 4420 for tqpair=0x6150001f2780 fails with errno = 111 and the qpair cannot be recovered]
00:37:32.609 [2024-07-13 13:49:07.195655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.609 [2024-07-13 13:49:07.195691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:32.609 qpair failed and we were unable to recover it.
00:37:32.609 [2024-07-13 13:49:07.195914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.195952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.196113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.196151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.196347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.196384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.196585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.196616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.196842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.196883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.197055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.197091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.197332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.197364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.197573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.197604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.197778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.197813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.198046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.198078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 
00:37:32.609 [2024-07-13 13:49:07.198260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.198292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.198530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.198562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.198753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.198788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.198961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.198998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.199183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.199218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.199437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.199469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.199693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.199729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.199899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.199934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.200127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.200163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.200380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.200412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 
00:37:32.609 [2024-07-13 13:49:07.200608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.200643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.200838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.200880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.201076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.201111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.201293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.201325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.201516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.201552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.201717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.201752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.201958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.201991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.609 qpair failed and we were unable to recover it. 00:37:32.609 [2024-07-13 13:49:07.202169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.609 [2024-07-13 13:49:07.202200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.202371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.202407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.202600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.202636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 
00:37:32.610 [2024-07-13 13:49:07.202802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.202837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.203017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.203049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.203241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.203277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.203440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.203476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.203636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.203671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.203830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.203862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.204100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.204136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.204337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.204373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.204579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.204610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.204762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.204798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 
00:37:32.610 [2024-07-13 13:49:07.205021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.205091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.205258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.205293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.205488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.205524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.205738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.205782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.205962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.205997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.206187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.206222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.206420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.206452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.206594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.206626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.206816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.206851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.207096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.207137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 
00:37:32.610 [2024-07-13 13:49:07.207337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.207373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.207577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.207609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.207813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.207848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.208046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.208081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.208310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.208346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.208521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.208553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.208730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.208762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.208957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.208992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.209194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.209231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.209434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.209466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 
00:37:32.610 [2024-07-13 13:49:07.209633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.209668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.209853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.209908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.210111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.210144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.210355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.210387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.210566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.210601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.210782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.210817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.211028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.211064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.211265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.610 [2024-07-13 13:49:07.211298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.610 qpair failed and we were unable to recover it. 00:37:32.610 [2024-07-13 13:49:07.211468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.211503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.211675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.211710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 
00:37:32.611 [2024-07-13 13:49:07.211872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.211908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.212084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.212116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.212303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.212338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.212532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.212563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.212706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.212755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.212989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.213022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.213264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.213299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.213483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.213519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.213719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.213757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.213965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.214002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 
00:37:32.611 [2024-07-13 13:49:07.214157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.214188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.214409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.214445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.214627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.214663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.214852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.214891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.215052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.215083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.215254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.215286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.215475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.215510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.215692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.215724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.215927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.215963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.216147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.216183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 
00:37:32.611 [2024-07-13 13:49:07.216371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.216406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.216577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.216609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.216814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.216850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.217061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.217096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.217322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.217357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.217525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.217558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.217737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.217769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.217938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.217974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.218191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.218227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.218423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.218460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 
00:37:32.611 [2024-07-13 13:49:07.218621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.218657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.218846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.218899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.219098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.219130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.219293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.219325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.219493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.219540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.219747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.219779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.219973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.220009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.220231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.611 [2024-07-13 13:49:07.220263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.611 qpair failed and we were unable to recover it. 00:37:32.611 [2024-07-13 13:49:07.220455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.220492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.220720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.220755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 
00:37:32.612 [2024-07-13 13:49:07.220923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.220960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.221133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.221166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.221399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.221435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.221591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.221627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.221815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.221850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.222054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.222086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.222250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.222285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.222443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.222479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.222685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.222717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.222871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.222908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 
00:37:32.612 [2024-07-13 13:49:07.223126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.223161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.223359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.223391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.223589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.223625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.223836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.223874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.224097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.224132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.224323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.224358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.224524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.224561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.224789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.224821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.225015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.225050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.225241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.225276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 
00:37:32.612 [2024-07-13 13:49:07.225470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.225505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.225702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.225734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.225908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.225945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.226183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.226216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.226363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.226394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.226592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.226624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.226790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.226825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.227022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.227054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.227230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.227262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.227437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.227469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 
00:37:32.612 [2024-07-13 13:49:07.227614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.227646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.227855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.227912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.228092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.228128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.228293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.228326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.228516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.228550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.228741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.228794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.229028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.229065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.229250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.229284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.612 [2024-07-13 13:49:07.229462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.612 [2024-07-13 13:49:07.229494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.612 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.229670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.229726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 
00:37:32.613 [2024-07-13 13:49:07.229930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.229964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.230164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.230197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.230347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.230379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.230611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.230665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.230879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.230931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.231104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.231147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.231299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.231331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.231567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.231622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.231833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.231872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.232052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.232090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 
00:37:32.613 [2024-07-13 13:49:07.232297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.232329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.232504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.232541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.232719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.232755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.232957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.232990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.233144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.233177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.233369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.233406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.233567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.233603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.233802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.233834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.234005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.234037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.234219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.234255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 
00:37:32.613 [2024-07-13 13:49:07.234428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.234464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.234660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.234692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.234874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.234909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.235097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.235130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.235318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.235354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.235555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.235588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.235789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.235822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.236009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.236042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.236213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.236246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.236457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.236490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 
00:37:32.613 [2024-07-13 13:49:07.236697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.236730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.236905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.236957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.237116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.237149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.613 qpair failed and we were unable to recover it. 00:37:32.613 [2024-07-13 13:49:07.237361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.613 [2024-07-13 13:49:07.237393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.237591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.237624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.237808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.237842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.238037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.238070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.238250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.238283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.238457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.238490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.238716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.238752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 
00:37:32.614 [2024-07-13 13:49:07.238951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.238985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.239183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.239216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.239279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:37:32.614 [2024-07-13 13:49:07.239529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.239576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.239916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.239952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.240133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.240166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.240338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.240374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.240566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.240601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.240794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.240826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.241005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.241037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 
00:37:32.614 [2024-07-13 13:49:07.241248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.241284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.241477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.241509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.241702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.241737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.241939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.241972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.242145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.242178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.242380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.242417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.242617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.242653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.242842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.242882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.243093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.243126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.243336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.243373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 
00:37:32.614 [2024-07-13 13:49:07.243590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.243622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.243788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.243824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.244030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.244063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.244234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.244271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.244523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.244578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.244785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.244818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.245005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.245050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.245245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.245281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.245457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.245493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.245687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.245719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 
00:37:32.614 [2024-07-13 13:49:07.245946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.245979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.246155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.246187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.246364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.246395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.246572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.614 [2024-07-13 13:49:07.246604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.614 qpair failed and we were unable to recover it. 00:37:32.614 [2024-07-13 13:49:07.246773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.246808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.247017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.247050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.247208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.247244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.247464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.247500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.247677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.247713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.247923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.247957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 
00:37:32.615 [2024-07-13 13:49:07.248157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.248189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.248393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.248425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.248646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.248683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.248880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.248916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.249098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.249131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.249320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.249356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.249547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.249587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.249761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.249793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.249950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.249983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.250178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.250215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 
00:37:32.615 [2024-07-13 13:49:07.250416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.250448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.250629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.250661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.250855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.250900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.251100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.251132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.251352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.251387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.251549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.251585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.251761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.251793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.251970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.252003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.252162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.252198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.252372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.252404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 
00:37:32.615 [2024-07-13 13:49:07.252552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.252584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.252751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.252801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.253024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.253057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.253254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.253295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.253483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.253519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.253725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.253758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.253935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.253968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.254137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.254170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.254363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.254394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.254609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.254682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 
00:37:32.615 [2024-07-13 13:49:07.254886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.254935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.255109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.255141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.255337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.255373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.255589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.255624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.615 qpair failed and we were unable to recover it. 00:37:32.615 [2024-07-13 13:49:07.255845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.615 [2024-07-13 13:49:07.255886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.256080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.256115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.256308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.256344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.256549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.256581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.256779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.256810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.256997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.257032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 
00:37:32.616 [2024-07-13 13:49:07.257215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.257248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.257414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.257447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.257651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.257695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.257887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.257920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.258093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.258126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.258298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.258330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.258487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.258521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.258700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.258732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.258883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.258917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.259070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.259103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 
00:37:32.616 [2024-07-13 13:49:07.259252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.259296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.259454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.259486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.259687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.259721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.259894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.259927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.260104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.260137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.260311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.260343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.260530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.260562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.260767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.260812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.261029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.261061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.261260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.261293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 
00:37:32.616 [2024-07-13 13:49:07.261463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.261495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.261679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.261711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.261879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.261911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.262085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.262124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.262270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.262302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.262481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.262513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.262656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.262686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.262893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.262926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.263078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.263110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 00:37:32.616 [2024-07-13 13:49:07.263273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.616 [2024-07-13 13:49:07.263305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.616 qpair failed and we were unable to recover it. 
00:37:32.616 [2024-07-13 13:49:07.263504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.616 [2024-07-13 13:49:07.263536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:32.616 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously from 13:49:07.263 through 13:49:07.312 against tqpair=0x6150001f2780, tqpair=0x615000210000, and tqpair=0x6150001ffe80, all with addr=10.0.0.2, port=4420 ...]
00:37:32.622 [2024-07-13 13:49:07.312019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.622 [2024-07-13 13:49:07.312052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.622 qpair failed and we were unable to recover it.
00:37:32.622 [2024-07-13 13:49:07.312218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.622 [2024-07-13 13:49:07.312251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.622 qpair failed and we were unable to recover it. 00:37:32.622 [2024-07-13 13:49:07.312492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.622 [2024-07-13 13:49:07.312529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.622 qpair failed and we were unable to recover it. 00:37:32.622 [2024-07-13 13:49:07.312770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.622 [2024-07-13 13:49:07.312811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.622 qpair failed and we were unable to recover it. 00:37:32.622 [2024-07-13 13:49:07.312978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.622 [2024-07-13 13:49:07.313013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.622 qpair failed and we were unable to recover it. 00:37:32.622 [2024-07-13 13:49:07.313184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.622 [2024-07-13 13:49:07.313234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.622 qpair failed and we were unable to recover it. 00:37:32.898 [2024-07-13 13:49:07.313453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.313490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.313690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.313724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.313909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.313958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.314142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.314175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.314341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.314374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 
00:37:32.899 [2024-07-13 13:49:07.314555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.314588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.314789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.314821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.315005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.315038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.315190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.315224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.315422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.315455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.315627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.315660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.315828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.315861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.316054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.316087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.316261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.316294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.316510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.316565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 
00:37:32.899 [2024-07-13 13:49:07.316726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.316763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.317004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.317038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.317227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.317274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.317491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.317535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.317737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.317770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.318006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.318039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.318232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.318269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.318463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.318495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.318690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.318726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.318941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.318975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 
00:37:32.899 [2024-07-13 13:49:07.319115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.319147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.319343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.319379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.319571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.319607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.319807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.319840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.320053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.320086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.899 [2024-07-13 13:49:07.320306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.899 [2024-07-13 13:49:07.320342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.899 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.320510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.320543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.320751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.320787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.320969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.321003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.321205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.321237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 
00:37:32.900 [2024-07-13 13:49:07.321398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.321434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.321594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.321630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.321830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.321862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.322047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.322079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.322320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.322352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.322538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.322571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.322797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.322832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.323035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.323068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.323217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.323250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.323487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.323523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 
00:37:32.900 [2024-07-13 13:49:07.323692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.323729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.323928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.323961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.324135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.324184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.324412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.324444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.324615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.324647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.324836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.324877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.325054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.325086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.325293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.325325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.325540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.325593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.325809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.325845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 
00:37:32.900 [2024-07-13 13:49:07.326076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.326109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.326314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.326350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.326544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.326580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.326781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.326819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.326984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.327017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.327169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.327202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.327402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.327434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.327603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.900 [2024-07-13 13:49:07.327640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.900 qpair failed and we were unable to recover it. 00:37:32.900 [2024-07-13 13:49:07.327833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.327878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.328050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.328082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 
00:37:32.901 [2024-07-13 13:49:07.328274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.328320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.328472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.328505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.328707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.328739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.328970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.329005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.329160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.329192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.329361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.329393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.329584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.329619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.329788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.329823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.330048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.330081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.330289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.330343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 
00:37:32.901 [2024-07-13 13:49:07.330508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.330539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.330688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.330720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.330903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.330935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.331115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.331146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.331355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.331387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.331596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.331652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.331836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.331878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.332080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.332112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.332299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.332335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.332547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.332579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 
00:37:32.901 [2024-07-13 13:49:07.332765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.332797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.332992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.333025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.333190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.333228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.333429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.333461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.333658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.333694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.333893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.333944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.334114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.334146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.901 qpair failed and we were unable to recover it. 00:37:32.901 [2024-07-13 13:49:07.334294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.901 [2024-07-13 13:49:07.334325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.334518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.334554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.334796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.334831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 
00:37:32.902 [2024-07-13 13:49:07.335008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.335041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.335245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.335281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.335486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.335518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.335715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.335755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.335965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.335998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.336167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.336199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.336344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.336375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.336518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.336550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.336719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.336751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.336890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.336923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 
00:37:32.902 [2024-07-13 13:49:07.337100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.337132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.337340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.337372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.337696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.337756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.337982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.338015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.338226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.338257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.338465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.338500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.338693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.338728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.338950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.338983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.339230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.339282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.339498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.339536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 
00:37:32.902 [2024-07-13 13:49:07.339737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.339771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.339947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.339980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.340176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.340223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.340419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.340450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.340692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.340746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.340955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.340989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.341172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.341203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.341368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.341400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.341595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.341630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.902 qpair failed and we were unable to recover it. 00:37:32.902 [2024-07-13 13:49:07.341822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.902 [2024-07-13 13:49:07.341857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.903 qpair failed and we were unable to recover it. 
00:37:32.903 [2024-07-13 13:49:07.342098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.903 [2024-07-13 13:49:07.342145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:32.903 qpair failed and we were unable to recover it.
00:37:32.903 [2024-07-13 13:49:07.344186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.903 [2024-07-13 13:49:07.344239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:32.903 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x6150001ffe80 up to 13:49:07.343 and then for tqpair=0x6150001f2780 (addr=10.0.0.2, port=4420) through 13:49:07.390 ...]
00:37:32.910 [2024-07-13 13:49:07.390922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.390958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.391151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.391183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.391359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.391390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.391596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.391632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.391796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.391832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.392040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.392072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.392241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.392276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.392493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.392528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.392729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.392760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.392946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.392978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 
00:37:32.910 [2024-07-13 13:49:07.393200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.393235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.393425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.393456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.393605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.393637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.393788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.393819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.393994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.394026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.394218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.394253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.394470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.394505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.394743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.394775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.910 [2024-07-13 13:49:07.394957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.910 [2024-07-13 13:49:07.394992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.910 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.395181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.395216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 
00:37:32.911 [2024-07-13 13:49:07.395390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.395422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.395619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.395668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.395895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.395936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.396138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.396170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.396388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.396423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.396582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.396617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.396778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.396810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.397020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.397056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.397252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.397287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.397510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.397542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 
00:37:32.911 [2024-07-13 13:49:07.397729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.397769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.397942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.397978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.398151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.398183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.398371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.398407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.398568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.398603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.398815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.398850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.399034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.399066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.399293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.399328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.399554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.399586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.399777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.399812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 
00:37:32.911 [2024-07-13 13:49:07.400016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.400051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.400281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.400313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.400467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.400499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.400677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.400709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.400918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.400952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.401144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.401181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.401374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.401409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.401603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.401645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.911 qpair failed and we were unable to recover it. 00:37:32.911 [2024-07-13 13:49:07.401888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.911 [2024-07-13 13:49:07.401927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.402099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.402133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 
00:37:32.912 [2024-07-13 13:49:07.402285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.402316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.402484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.402514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.402704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.402735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.402956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.402989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.403129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.403161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.403348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.403383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.403554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.403586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.403731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.403762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.403980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.404016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.404205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.404237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 
00:37:32.912 [2024-07-13 13:49:07.404425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.404460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.404658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.404690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.404876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.404908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.405082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.405114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.405338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.405373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.405596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.405628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.405832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.405872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.406045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.406081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.406242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.406274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.406466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.406502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 
00:37:32.912 [2024-07-13 13:49:07.406719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.406759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.406973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.407005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.407179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.407214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.407402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.407437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.407643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.407674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.407898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.407935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.408104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.408140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.408311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.408343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.408537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.408572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.408754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.408787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 
00:37:32.912 [2024-07-13 13:49:07.408988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.912 [2024-07-13 13:49:07.409020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.912 qpair failed and we were unable to recover it. 00:37:32.912 [2024-07-13 13:49:07.409219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.409255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.409458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.409493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.409668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.409700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.409905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.409941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.410164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.410199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.410383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.410414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.410642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.410677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.410843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.410895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.411068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.411100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 
00:37:32.913 [2024-07-13 13:49:07.411248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.411285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.411449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.411484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.411683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.411714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.411896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.411934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.412121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.412156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.412311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.412342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.412543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.412578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.412801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.412836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.413048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.413080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.413227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.413258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 
00:37:32.913 [2024-07-13 13:49:07.413444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.413480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.413651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.413683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.413853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.413895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.414060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.414093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.414232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.414264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.414460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.414496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.414658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.414693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.414921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.414955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.415133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.415169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.415389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.415423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 
00:37:32.913 [2024-07-13 13:49:07.415624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.415660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.415859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.913 [2024-07-13 13:49:07.415916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.913 qpair failed and we were unable to recover it. 00:37:32.913 [2024-07-13 13:49:07.416109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.416144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.416343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.416374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.416537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.416572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.416756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.416792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.416984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.417016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.417206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.417241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.417429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.417463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.417637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.417670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 
00:37:32.914 [2024-07-13 13:49:07.417891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.417926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.418145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.418181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.418374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.418406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.418604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.418639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.418844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.418890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.419094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.419127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.419313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.419345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.419550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.419582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.419753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.419785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 00:37:32.914 [2024-07-13 13:49:07.419988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.914 [2024-07-13 13:49:07.420025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.914 qpair failed and we were unable to recover it. 
00:37:32.914 [2024-07-13 13:49:07.420243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.914 [2024-07-13 13:49:07.420278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:32.914 qpair failed and we were unable to recover it.
00:37:32.921 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry through [2024-07-13 13:49:07.468229] ...]
00:37:32.921 [2024-07-13 13:49:07.468451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.921 [2024-07-13 13:49:07.468483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.921 qpair failed and we were unable to recover it. 00:37:32.921 [2024-07-13 13:49:07.468663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.921 [2024-07-13 13:49:07.468694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.921 qpair failed and we were unable to recover it. 00:37:32.921 [2024-07-13 13:49:07.468938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.921 [2024-07-13 13:49:07.468971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.921 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.469152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.469184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.469356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.469388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.469580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.469615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.469813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.469848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.470029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.470061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.470207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.470239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.470430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.470465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 
00:37:32.922 [2024-07-13 13:49:07.470661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.470692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.470872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.470904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.471101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.471137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.471323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.471354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.471550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.471585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.471782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.471817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.472015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.472047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.472272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.472307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.472533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.472569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.472740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.472771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 
00:37:32.922 [2024-07-13 13:49:07.472956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.472991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.473180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.473216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.473412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.473444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.473634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.473669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.473863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.473916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.474113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.474145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.474305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.474340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.474498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.474544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.474745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.474777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.475000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.475036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 
00:37:32.922 [2024-07-13 13:49:07.475221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.475255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.475433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.475464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.475630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.475661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.922 [2024-07-13 13:49:07.475836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.922 [2024-07-13 13:49:07.475881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.922 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.476081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.476113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.476280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.476314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.476526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.476561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.476807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.476846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.477074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.477105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.477344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.477375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 
00:37:32.923 [2024-07-13 13:49:07.477552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.477583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.477804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.477839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.478077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.478113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.478279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.478310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.478490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.478521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.478722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.478757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.478954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.478987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.479182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.479217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.479438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.479469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.479622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.479654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 
00:37:32.923 [2024-07-13 13:49:07.479880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.479915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.480107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.480142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.480310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.480341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.480558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.480593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.480788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.480822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.481026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.481058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.481251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.481282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.481456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.481490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.481685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.481716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.481887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.481922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 
00:37:32.923 [2024-07-13 13:49:07.482109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.482144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.482335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.482367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.482561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.482596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.482759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.482794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.923 [2024-07-13 13:49:07.482985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.923 [2024-07-13 13:49:07.483018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.923 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.483223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.483258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.483424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.483460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.483684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.483715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.483907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.483943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.484110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.484145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 
00:37:32.924 [2024-07-13 13:49:07.484314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.484345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.484537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.484572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.484742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.484777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.484966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.484997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.485175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.485207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.485393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.485429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.485632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.485663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.485862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.485918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.486135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.486169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.486344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.486375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 
00:37:32.924 [2024-07-13 13:49:07.486569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.486604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.486797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.486833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.487047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.487080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.487228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.487260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.487430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.487480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.487679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.487710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.487877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.487912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.488110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.488142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.488338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.488369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.488520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.488551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 
00:37:32.924 [2024-07-13 13:49:07.488742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.488777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.488952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.488994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.489210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.489245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.489442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.489477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.489666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.489697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.489891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.924 [2024-07-13 13:49:07.489927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.924 qpair failed and we were unable to recover it. 00:37:32.924 [2024-07-13 13:49:07.490101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.490137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.490359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.490390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.490616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.490651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.490824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.490859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 
00:37:32.925 [2024-07-13 13:49:07.491074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.491105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.491273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.491308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.491509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.491543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.491766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.491798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.492002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.492034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.492202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.492237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.492434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.492466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.492675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.492710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.492896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.492931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.493153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.493185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 
00:37:32.925 [2024-07-13 13:49:07.493379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.493414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.493632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.493667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.493862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.493904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.494072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.494104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.494256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.494305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.494499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.494530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.494745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.494780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.494971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.495012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.495183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.495214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.495373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.495405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 
00:37:32.925 [2024-07-13 13:49:07.495580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.495619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.495824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.495857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.496045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.496080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.496244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.496276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.496429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.496472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.496675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.496711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.925 qpair failed and we were unable to recover it. 00:37:32.925 [2024-07-13 13:49:07.496932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.925 [2024-07-13 13:49:07.496965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.926 qpair failed and we were unable to recover it. 00:37:32.926 [2024-07-13 13:49:07.497114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.926 [2024-07-13 13:49:07.497145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.926 qpair failed and we were unable to recover it. 00:37:32.926 [2024-07-13 13:49:07.497335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.926 [2024-07-13 13:49:07.497370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.926 qpair failed and we were unable to recover it. 00:37:32.926 [2024-07-13 13:49:07.497563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.926 [2024-07-13 13:49:07.497599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.926 qpair failed and we were unable to recover it. 
00:37:32.926 [2024-07-13 13:49:07.497774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:32.926 [2024-07-13 13:49:07.497807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:32.926 qpair failed and we were unable to recover it.
00:37:32.926 [... the same three-line sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 2024-07-13 13:49:07.497774 through 13:49:07.546477 ...]
00:37:32.933 [2024-07-13 13:49:07.546676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.546707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.546984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.547019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.547203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.547238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.547451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.547483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.547670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.547702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.547844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.547880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.548079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.548111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.548337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.548383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.548554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.548589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.548812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.548843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 
00:37:32.933 [2024-07-13 13:49:07.549081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.549116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.549308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.549343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.549517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.549549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.549715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.549751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.549914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.549950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.550122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.550155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.550350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.550386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.550568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.550608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.550776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.550808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 00:37:32.933 [2024-07-13 13:49:07.550985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.551017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.933 qpair failed and we were unable to recover it. 
00:37:32.933 [2024-07-13 13:49:07.551215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.933 [2024-07-13 13:49:07.551263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.551486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.551518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.551752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.551783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.551938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.551971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.552144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.552176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.552412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.552447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.552636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.552671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.552889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.552938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.553138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.553186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.553389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.553420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 
00:37:32.934 [2024-07-13 13:49:07.553620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.553651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.553820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.553855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.554079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.554114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.554310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.554342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.554515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.554547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.554738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.554773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.554968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.555000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.555215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.555251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.555446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.555478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.555652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.555683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 
00:37:32.934 [2024-07-13 13:49:07.555851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.555919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.556143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.556175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.556362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.556393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.556594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.556629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.556821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.556856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.557048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.557079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.557249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.557281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.557458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.557489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.557658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.557690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.557853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.557905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 
00:37:32.934 [2024-07-13 13:49:07.558073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.558108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.558336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.558368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.558554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.558586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.934 qpair failed and we were unable to recover it. 00:37:32.934 [2024-07-13 13:49:07.558734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.934 [2024-07-13 13:49:07.558765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.558960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.558993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.559163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.559198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.559367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.559407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.559608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.559644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.559798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.559829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.560005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.560037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 
00:37:32.935 [2024-07-13 13:49:07.560220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.560252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.560442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.560477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.560674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.560708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.560901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.560933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.561128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.561163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.561319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.561353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.561551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.561583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.561745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.561777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.561962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.561995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.562242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.562273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 
00:37:32.935 [2024-07-13 13:49:07.562465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.562500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.562728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.562775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.562971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.563003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.563178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.563210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.563374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.563409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.563579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.563611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.563829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.563871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.564090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.564125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.564317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.564349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.564560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.564592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 
00:37:32.935 [2024-07-13 13:49:07.564735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.564767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.564964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.564996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.565199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.935 [2024-07-13 13:49:07.565234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.935 qpair failed and we were unable to recover it. 00:37:32.935 [2024-07-13 13:49:07.565394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.565430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.565660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.565692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.565935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.565968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.566181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.566216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.566412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.566443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.566665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.566700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.566880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.566916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 
00:37:32.936 [2024-07-13 13:49:07.567117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.567149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.567365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.567400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.567592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.567627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.567817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.567848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.568072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.568107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.568271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.568306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.568526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.568558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.568757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.568797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.568999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.569031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.569189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.569220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 
00:37:32.936 [2024-07-13 13:49:07.569391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.569428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.569619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.569654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.569827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.569858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.570078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.570110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.570252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.570299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.570521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.570552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.570703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.570734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.570932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.570968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.571168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.571200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.571391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.571426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 
00:37:32.936 [2024-07-13 13:49:07.571589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.571625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.571829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.571861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.572085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.572120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.572305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.572340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.936 [2024-07-13 13:49:07.572556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.936 [2024-07-13 13:49:07.572588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.936 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.572759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.572794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.572959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.572994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.573213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.573245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.573439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.573474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.573636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.573671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 
00:37:32.937 [2024-07-13 13:49:07.573890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.573922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.574118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.574153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.574365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.574400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.574572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.574603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.574783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.574816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.575012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.575048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.575288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.575320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.575547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.575582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.575775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.575810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 00:37:32.937 [2024-07-13 13:49:07.575990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.937 [2024-07-13 13:49:07.576022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.937 qpair failed and we were unable to recover it. 
00:37:32.937 [2024-07-13 13:49:07.576221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:32.937 [2024-07-13 13:49:07.576256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 
00:37:32.937 qpair failed and we were unable to recover it. 
00:37:32.937 [... the same three-line sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously from 13:49:07.576446 through 13:49:07.623446 ...] 
00:37:32.944 [2024-07-13 13:49:07.623472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:32.944 [2024-07-13 13:49:07.623503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 
00:37:32.944 qpair failed and we were unable to recover it. 
00:37:32.944 [2024-07-13 13:49:07.623678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.944 [2024-07-13 13:49:07.623710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.944 qpair failed and we were unable to recover it. 00:37:32.944 [2024-07-13 13:49:07.623891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.944 [2024-07-13 13:49:07.623941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.944 qpair failed and we were unable to recover it. 00:37:32.944 [2024-07-13 13:49:07.624097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.944 [2024-07-13 13:49:07.624128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.944 qpair failed and we were unable to recover it. 00:37:32.944 [2024-07-13 13:49:07.624324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.944 [2024-07-13 13:49:07.624358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.944 qpair failed and we were unable to recover it. 00:37:32.944 [2024-07-13 13:49:07.624551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.944 [2024-07-13 13:49:07.624587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.944 qpair failed and we were unable to recover it. 00:37:32.944 [2024-07-13 13:49:07.624831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.944 [2024-07-13 13:49:07.624869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.944 qpair failed and we were unable to recover it. 00:37:32.944 [2024-07-13 13:49:07.625018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:32.944 [2024-07-13 13:49:07.625050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:32.944 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.625239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.625274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.625479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.625511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.625677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.625708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 
00:37:33.222 [2024-07-13 13:49:07.625880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.625912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.626078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.626110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.626298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.626330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.626475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.626508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.626684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.626725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.626913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.626946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.627094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.627126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.627303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.627337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.627486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.627528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.627705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.627755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 
00:37:33.222 [2024-07-13 13:49:07.627988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.628022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.628235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.628268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.628427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.628462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.628637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.628668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.628877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.628919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.629077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.629114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.629311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.629347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.629564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.629600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.629824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.629861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 00:37:33.222 [2024-07-13 13:49:07.630104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.222 [2024-07-13 13:49:07.630135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.222 qpair failed and we were unable to recover it. 
00:37:33.222 [2024-07-13 13:49:07.630339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.630374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.630545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.630580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.630796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.630828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.631006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.631039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.631233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.631268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.631442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.631473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.631634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.631669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.631876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.631926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.632107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.632138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.632307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.632342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 
00:37:33.223 [2024-07-13 13:49:07.632564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.632599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.632801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.632832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.633008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.633041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.633256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.633291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.633498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.633530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.633703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.633738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.633938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.633972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.634145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.634176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.634413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.634450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.634651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.634683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 
00:37:33.223 [2024-07-13 13:49:07.634886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.634920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.635157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.635204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.635370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.635405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.635623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.635654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.635808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.635843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.636067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.636102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.636325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.636357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.636546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.636582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.636805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.636839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.637045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.637077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 
00:37:33.223 [2024-07-13 13:49:07.637226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.637258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.637449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.637484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.223 [2024-07-13 13:49:07.637679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.223 [2024-07-13 13:49:07.637710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.223 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.637909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.637945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.638136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.638176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.638370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.638401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.638626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.638660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.638819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.638854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.639059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.639091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.639319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.639354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 
00:37:33.224 [2024-07-13 13:49:07.639547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.639583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.639803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.639834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.639988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.640021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.640180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.640215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.640407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.640439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.640633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.640668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.640857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.640895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.641072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.641103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.641343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.641375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.641570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.641604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 
00:37:33.224 [2024-07-13 13:49:07.641800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.641831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.642034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.642070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.642236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.642271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.642494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.642526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.642747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.642781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.643000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.643035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.643264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.643295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.643495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.643530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.643711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.643747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.643935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.643967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 
00:37:33.224 [2024-07-13 13:49:07.644180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.644212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.644414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.644449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.644623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.644655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.644879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.644915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.645132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.645167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.224 [2024-07-13 13:49:07.645369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.224 [2024-07-13 13:49:07.645400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.224 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.645618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.645653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.645822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.645857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.646038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.646070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.646249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.646280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 
00:37:33.225 [2024-07-13 13:49:07.646437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.646468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.646642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.646673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.646850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.646889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.647110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.647145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.647349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.647385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.647567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.647599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.647786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.647821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.648004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.648036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.648216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.648247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.648422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.648457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 
00:37:33.225 [2024-07-13 13:49:07.648622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.648655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.648850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.648891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.649068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.649105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.649283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.649314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.649531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.649566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.649762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.649811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.650011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.650043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.650246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.650281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.650480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.650515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.650682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.650713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 
00:37:33.225 [2024-07-13 13:49:07.650901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.650937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.651121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.651156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.651357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.651388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.651583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.651618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.651834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.651875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.652049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.652081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.652259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.652291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.225 [2024-07-13 13:49:07.652535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.225 [2024-07-13 13:49:07.652567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.225 qpair failed and we were unable to recover it. 00:37:33.226 [2024-07-13 13:49:07.652732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.226 [2024-07-13 13:49:07.652764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.226 qpair failed and we were unable to recover it. 00:37:33.226 [2024-07-13 13:49:07.652956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.226 [2024-07-13 13:49:07.652992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.226 qpair failed and we were unable to recover it. 
00:37:33.226 [2024-07-13 13:49:07.653187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.226 [2024-07-13 13:49:07.653221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.226 qpair failed and we were unable to recover it.
[the same three-record sequence (connect() failed, errno = 111; sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every subsequent retry from 13:49:07.653 through 13:49:07.700]
00:37:33.233 [2024-07-13 13:49:07.700760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.233 [2024-07-13 13:49:07.700791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.233 qpair failed and we were unable to recover it. 00:37:33.233 [2024-07-13 13:49:07.700986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.233 [2024-07-13 13:49:07.701019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.233 qpair failed and we were unable to recover it. 00:37:33.233 [2024-07-13 13:49:07.701206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.233 [2024-07-13 13:49:07.701241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.233 qpair failed and we were unable to recover it. 00:37:33.233 [2024-07-13 13:49:07.701420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.233 [2024-07-13 13:49:07.701451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.233 qpair failed and we were unable to recover it. 00:37:33.233 [2024-07-13 13:49:07.701622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.233 [2024-07-13 13:49:07.701654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.233 qpair failed and we were unable to recover it. 00:37:33.233 [2024-07-13 13:49:07.701810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.233 [2024-07-13 13:49:07.701842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.233 qpair failed and we were unable to recover it. 00:37:33.233 [2024-07-13 13:49:07.702057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.233 [2024-07-13 13:49:07.702108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.233 qpair failed and we were unable to recover it. 00:37:33.233 [2024-07-13 13:49:07.702313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.233 [2024-07-13 13:49:07.702345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.233 qpair failed and we were unable to recover it. 00:37:33.233 [2024-07-13 13:49:07.702544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.233 [2024-07-13 13:49:07.702579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.233 qpair failed and we were unable to recover it. 00:37:33.233 [2024-07-13 13:49:07.702738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.233 [2024-07-13 13:49:07.702773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.233 qpair failed and we were unable to recover it. 
00:37:33.233 [2024-07-13 13:49:07.702970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.233 [2024-07-13 13:49:07.703003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.703173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.703208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.703375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.703411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.703607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.703639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.703824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.703860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.704027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.704062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.704284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.704315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.704466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.704498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.704643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.704675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.704924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.704957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 
00:37:33.234 [2024-07-13 13:49:07.705133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.705182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.705374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.705409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.705606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.705637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.705813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.705844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.706081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.706116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.706296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.706327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.706470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.706521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.706740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.706776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.706977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.707010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.707179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.707210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 
00:37:33.234 [2024-07-13 13:49:07.707402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.707438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.707635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.707681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.707879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.707915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.708123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.708159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.708326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.708357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.708531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.708562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.708757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.708794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.708991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.709023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.709212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.709247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.709442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.709478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 
00:37:33.234 [2024-07-13 13:49:07.709677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.709709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.709887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.234 [2024-07-13 13:49:07.709919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.234 qpair failed and we were unable to recover it. 00:37:33.234 [2024-07-13 13:49:07.710094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.710126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.710307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.710338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.710534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.710569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.710734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.710769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.710939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.710971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.711110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.711161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.711346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.711381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.711573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.711605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 
00:37:33.235 [2024-07-13 13:49:07.711800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.711835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.712006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.712038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.712233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.712264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.712420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.712455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.712640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.712675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.712861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.712899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.713095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.713130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.713286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.713321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.713520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.713551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.713743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.713778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 
00:37:33.235 [2024-07-13 13:49:07.713977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.714009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.714146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.714177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.714316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.714366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.714556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.714591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.714780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.714812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.714987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.715019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.715218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.715253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.715476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.715508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.715696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.715731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.715899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.715935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 
00:37:33.235 [2024-07-13 13:49:07.716137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.716170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.716395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.716435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.235 qpair failed and we were unable to recover it. 00:37:33.235 [2024-07-13 13:49:07.716638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.235 [2024-07-13 13:49:07.716673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.716896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.716928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.717079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.717111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.717267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.717298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.717469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.717501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.717730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.717765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.717952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.717988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.718188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.718220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 
00:37:33.236 [2024-07-13 13:49:07.718391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.718442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.718637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.718672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.718936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.718968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.719138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.719188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.719416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.719451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.719627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.719658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.719856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.719898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.720094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.720129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.720329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.720361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.720524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.720559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 
00:37:33.236 [2024-07-13 13:49:07.720775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.720810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.721023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.721056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.721212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.721244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.721419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.721450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.721787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.721843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.722056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.722098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.722352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.722384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.722554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.722586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.722813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.722848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.723054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.723086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 
00:37:33.236 [2024-07-13 13:49:07.723290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.236 [2024-07-13 13:49:07.723321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.236 qpair failed and we were unable to recover it. 00:37:33.236 [2024-07-13 13:49:07.723527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.723559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.723735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.723767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.724027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.724060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.724286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.724321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.724512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.724547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.724723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.724754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.724954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.724990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.725201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.725232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.725444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.725476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 
00:37:33.237 [2024-07-13 13:49:07.725667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.725702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.725908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.725946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.726152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.726183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.726375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.726411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.726583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.726618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.726857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.726915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.727117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.727166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.727332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.727367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.727568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.727599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.727820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.727855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 
00:37:33.237 [2024-07-13 13:49:07.728062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.728098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.728295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.728327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.728506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.728537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.728768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.728804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.729006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.729039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.729220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.729252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.729500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.729532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.729708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.729740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.729893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.729926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 00:37:33.237 [2024-07-13 13:49:07.730118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.237 [2024-07-13 13:49:07.730154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.237 qpair failed and we were unable to recover it. 
00:37:33.237 [2024-07-13 13:49:07.730348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.730380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.730594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.730629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.730820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.730856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.731049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.731080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.731272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.731308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.731500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.731535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.731729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.731760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.731933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.731970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.732192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.732227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.732429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.732462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 
00:37:33.238 [2024-07-13 13:49:07.732663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.732698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.732915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.732947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.733148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.733180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.733347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.733378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.733575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.733610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.733787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.733836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.734039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.734071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.734264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.734299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.734521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.734552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.734746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.734781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 
00:37:33.238 [2024-07-13 13:49:07.734949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.734985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.735183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.735219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.735398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.735433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.735622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.735657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.735864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.735903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.736103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.736139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.736371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.736403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.736609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.736640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.736841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.736885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 00:37:33.238 [2024-07-13 13:49:07.737080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.238 [2024-07-13 13:49:07.737126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.238 qpair failed and we were unable to recover it. 
00:37:33.239 [2024-07-13 13:49:07.737303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.737335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.737514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.737545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.737750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.737785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.737975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.738007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.738172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.738207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.738430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.738465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.738688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.738720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.738949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.738984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.739166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.739201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.739366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.739398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 
00:37:33.239 [2024-07-13 13:49:07.739588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.739623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.739779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.739814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.740004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.740036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.740206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.740257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.740449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.740484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.740687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.740719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.740875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.740908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.741135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.741171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.741398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.741430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.741596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.741631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 
00:37:33.239 [2024-07-13 13:49:07.741826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.741861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.742065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.742097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.742310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.742344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.742512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.742548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.742739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.742770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.239 qpair failed and we were unable to recover it. 00:37:33.239 [2024-07-13 13:49:07.742989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.239 [2024-07-13 13:49:07.743024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.743218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.743253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.743424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.743455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.743604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.743635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.743812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.743843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 
00:37:33.240 [2024-07-13 13:49:07.744061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.744093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.744230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.744266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.744449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.744481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.744649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.744681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.744854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.744912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.745090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.745123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.745303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.745334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.745553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.745588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.745802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.745834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.746016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.746049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 
00:37:33.240 [2024-07-13 13:49:07.746244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.746280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.746478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.746510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.746683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.746716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.746936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.746972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.747176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.747211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.747395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.747427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.747662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.747697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.747894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.747926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.748105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.748136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.748358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.748393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 
00:37:33.240 [2024-07-13 13:49:07.748585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.748620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.748814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.748846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.749055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.749090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.749306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.749341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.749556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.749587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.240 qpair failed and we were unable to recover it. 00:37:33.240 [2024-07-13 13:49:07.749816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.240 [2024-07-13 13:49:07.749848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.749996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.750027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.750230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.750262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.750470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.750505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.750704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.750739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 
00:37:33.241 [2024-07-13 13:49:07.750939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.750971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.751196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.751232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.751428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.751463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.751686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.751727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.751902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.751934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.752103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.752135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.752306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.752338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.752509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.752540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.752766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.752801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.753028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.753060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 
00:37:33.241 [2024-07-13 13:49:07.753234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.753269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.753467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.753507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.753764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.753799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.753991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.754023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.754220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.754255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.754425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.754456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.754628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.754660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.754886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.754918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.755119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.755150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.755347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.755382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 
00:37:33.241 [2024-07-13 13:49:07.755575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.755610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.755777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.755809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.756011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.756044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.756222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.756254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.756428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.756460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.241 qpair failed and we were unable to recover it. 00:37:33.241 [2024-07-13 13:49:07.756690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.241 [2024-07-13 13:49:07.756725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.756884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.756920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.757123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.757154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.757343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.757378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.757606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.757638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 
00:37:33.242 [2024-07-13 13:49:07.757818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.757849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.758030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.758062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.758281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.758316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.758489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.758526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.758672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.758722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.758918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.758954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.759123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.759155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.759300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.759332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.759556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.759592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.759795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.759830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 
00:37:33.242 [2024-07-13 13:49:07.760055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.760087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.760259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.760294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.760490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.760521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.760719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.760754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.760967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.761003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.761204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.761237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.761462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.761497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.761661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.761696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.761859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.761897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.762077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.762109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 
00:37:33.242 [2024-07-13 13:49:07.762308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.762340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.762557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.762593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.762744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.762775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.762970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.763006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.763184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.763216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.763412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.763446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.242 [2024-07-13 13:49:07.763643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.242 [2024-07-13 13:49:07.763674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.242 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.763845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.763882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.764104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.764139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.764308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.764343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 
00:37:33.243 [2024-07-13 13:49:07.764538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.764569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.764723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.764754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.764944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.764980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.765172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.765203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.765419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.765454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.765656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.765692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.765886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.765918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.766082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.766129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.766345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.766381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.766550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.766581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 
00:37:33.243 [2024-07-13 13:49:07.766763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.766794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.766949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.766981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.767119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.767150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.767334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.767369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.767537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.767572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.767783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.767818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.768039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.768071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.768236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.768271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.768496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.768528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.768719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.768754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 
00:37:33.243 [2024-07-13 13:49:07.768916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.768952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.769144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.769176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.769368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.769402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.769565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.769600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.769773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.769804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.770039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.770072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.770277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.770313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.770538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.243 [2024-07-13 13:49:07.770570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.243 qpair failed and we were unable to recover it. 00:37:33.243 [2024-07-13 13:49:07.770752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.244 [2024-07-13 13:49:07.770787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.244 qpair failed and we were unable to recover it. 00:37:33.244 [2024-07-13 13:49:07.770970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.244 [2024-07-13 13:49:07.771007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.244 qpair failed and we were unable to recover it. 
00:37:33.250 [2024-07-13 13:49:07.816602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.250 [2024-07-13 13:49:07.816633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.250 qpair failed and we were unable to recover it. 00:37:33.250 [2024-07-13 13:49:07.816815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.250 [2024-07-13 13:49:07.816847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.250 qpair failed and we were unable to recover it. 00:37:33.250 [2024-07-13 13:49:07.817059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.250 [2024-07-13 13:49:07.817094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.250 qpair failed and we were unable to recover it. 00:37:33.250 [2024-07-13 13:49:07.817276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.250 [2024-07-13 13:49:07.817311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.250 qpair failed and we were unable to recover it. 00:37:33.250 [2024-07-13 13:49:07.817510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.250 [2024-07-13 13:49:07.817542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.817737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.817773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.817938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.817973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.818141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.818172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.818361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.818397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.818557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.818592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 
00:37:33.251 [2024-07-13 13:49:07.818789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.818820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.819028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.819060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.819230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.819261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.819429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.819461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.819680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.819715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.819908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.819944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.820167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.820198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.820358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.820393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.820576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.820611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.820810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.820842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 
00:37:33.251 [2024-07-13 13:49:07.821040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.821075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.821247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.821282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.821454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.821486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.821625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.821671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.821897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.821933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.822156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.822188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.822380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.822415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.822593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.822628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.822801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.822836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.823031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.823063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 
00:37:33.251 [2024-07-13 13:49:07.823228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.823263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.823460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.823491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.823699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.823734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.823919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.823961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.824175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.251 [2024-07-13 13:49:07.824207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.251 qpair failed and we were unable to recover it. 00:37:33.251 [2024-07-13 13:49:07.824403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.824438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.824616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.824648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.824817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.824848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.825053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.825088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.825294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.825325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 
00:37:33.252 [2024-07-13 13:49:07.825498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.825529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.825718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.825754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.825920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.825955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.826154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.826185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.826360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.826410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.826599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.826634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.826856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.826894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.827127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.827161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.827376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.827411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.827612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.827644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 
00:37:33.252 [2024-07-13 13:49:07.827834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.827874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.828085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.828122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.828319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.828351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.828525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.828556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.828749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.828784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.828994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.829026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.829214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.829249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.829423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.829459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.829640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.829672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.829870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.829906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 
00:37:33.252 [2024-07-13 13:49:07.830072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.830107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.830279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.830312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.830485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.830516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.830716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.830747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.830967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.252 [2024-07-13 13:49:07.830999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.252 qpair failed and we were unable to recover it. 00:37:33.252 [2024-07-13 13:49:07.831170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.831202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.831341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.831373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.831544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.831576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.831771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.831810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.832008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.832041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 
00:37:33.253 [2024-07-13 13:49:07.832240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.832272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.832443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.832474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.832657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.832689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.832864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.832901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.833086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.833121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.833341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.833373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.833572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.833603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.833803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.833838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.834048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.834080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.834283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.834314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 
00:37:33.253 [2024-07-13 13:49:07.834540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.834575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.834738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.834772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.834953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.834985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.835177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.835213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.835412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.835447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.835643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.835675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.835839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.835880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.836045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.836080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.836277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.836308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.836456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.836487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 
00:37:33.253 [2024-07-13 13:49:07.836712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.836747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.836964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.836996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.837182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.837217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.837411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.837446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.837625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.837657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.837857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.837898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.838105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.253 [2024-07-13 13:49:07.838137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.253 qpair failed and we were unable to recover it. 00:37:33.253 [2024-07-13 13:49:07.838278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.838320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.838518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.838553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.838753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.838788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 
00:37:33.254 [2024-07-13 13:49:07.838986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.839020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.839218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.839253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.839468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.839502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.839698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.839730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.839904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.839936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.840160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.840195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.840361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.840392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.840584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.840619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.840781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.840821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.841002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.841034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 
00:37:33.254 [2024-07-13 13:49:07.841208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.841239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.841408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.841439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.841618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.841649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.841799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.841830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.841980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.842012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.842211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.842244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.842444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.842479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.842640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.842677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.842842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.842880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.843051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.843082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 
00:37:33.254 [2024-07-13 13:49:07.843243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.843278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.843498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.843529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.843735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.843772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.844008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.844040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.844225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.844257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.844437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.844469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.844665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.844706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.844905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.844938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.254 [2024-07-13 13:49:07.845131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.254 [2024-07-13 13:49:07.845166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.254 qpair failed and we were unable to recover it. 00:37:33.255 [2024-07-13 13:49:07.845380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.255 [2024-07-13 13:49:07.845411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.255 qpair failed and we were unable to recover it. 
00:37:33.255 [2024-07-13 13:49:07.845621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.255 [2024-07-13 13:49:07.845652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.255 qpair failed and we were unable to recover it.
00:37:33.262 [2024-07-13 13:49:07.892186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.262 [2024-07-13 13:49:07.892217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.262 qpair failed and we were unable to recover it.
00:37:33.262 [2024-07-13 13:49:07.892412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.892447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.892634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.892669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.892883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.892932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.893130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.893179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.893349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.893385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.893566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.893598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.893782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.893817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.894043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.894079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.894304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.894340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.894526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.894558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 
00:37:33.262 [2024-07-13 13:49:07.894795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.894831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.895021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.895053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.895283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.895318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.895489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.895534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.895715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.895747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.895936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.895972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.896159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.896194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.896363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.896394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.896556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.896591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.896786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.896822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 
00:37:33.262 [2024-07-13 13:49:07.897056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.897088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.897252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.897287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.897503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.897538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.897738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.897770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.897942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.897977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.262 qpair failed and we were unable to recover it. 00:37:33.262 [2024-07-13 13:49:07.898162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.262 [2024-07-13 13:49:07.898197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.898393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.898425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.898618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.898653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.898815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.898851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.899075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.899107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 
00:37:33.263 [2024-07-13 13:49:07.899340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.899375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.899541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.899576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.899827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.899863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.900044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.900076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.900281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.900316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.900515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.900546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.900701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.900736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.900932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.900964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.901140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.901172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.901364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.901399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 
00:37:33.263 [2024-07-13 13:49:07.901585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.901621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.901807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.901839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.902033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.902065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.902252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.902284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.902421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.902453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.902645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.902680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.902879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.902912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.903063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.903095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.903236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.903289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.903504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.903539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 
00:37:33.263 [2024-07-13 13:49:07.903714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.903746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.903923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.903955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.904137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.904174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.904376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.904407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.904577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.904613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.263 qpair failed and we were unable to recover it. 00:37:33.263 [2024-07-13 13:49:07.904829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.263 [2024-07-13 13:49:07.904860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.905046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.905077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.905306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.905341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.905522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.905557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.905746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.905777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 
00:37:33.264 [2024-07-13 13:49:07.905999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.906035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.906237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.906272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.906443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.906475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.906651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.906683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.906852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.906900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.907090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.907123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.907344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.907378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.907542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.907577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.907797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.907828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.908012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.908044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 
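Every failure in the run above is the same event: posix_sock_create()'s connect() toward 10.0.0.2:4420 returns errno = 111, which on Linux is ECONNREFUSED, so nvme_tcp_qpair_connect_sock cannot re-establish the qpair while nothing on the target side is listening on that port. As a point of reference only (this snippet is not part of the test suite, and it deliberately uses 127.0.0.1 with an arbitrary idle port instead of the test's 10.0.0.2:4420), a bare connect() against a TCP port with no listener reproduces the same errno:

/* econnrefused_demo.c — minimal sketch, not part of the SPDK tests: shows how a
 * plain connect() to a TCP port with no listener fails with errno 111
 * (ECONNREFUSED) on Linux, the same errno reported by posix_sock_create above.
 * Build: cc econnrefused_demo.c -o econnrefused_demo */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* 127.0.0.1:4420 stands in for the target address seen in the log;
     * any local port with no listener behaves the same way. */
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no process listening, the kernel rejects the handshake and
         * connect() fails immediately: errno = 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

The initiator keeps retrying the qpair, which is why the identical message repeats until the target application is brought back up in the trace that follows.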
00:37:33.264 [2024-07-13 13:49:07.908251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.908286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.908455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.908487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.908685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.908732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.908948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.908983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.909193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.909225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.909450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.909485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.909699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 459540 Killed "${NVMF_APP[@]}" "$@" 00:37:33.264 [2024-07-13 13:49:07.909734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.909930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.909991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:37:33.264 [2024-07-13 13:49:07.910184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.910219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 
00:37:33.264 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:33.264 [2024-07-13 13:49:07.910400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:33.264 [2024-07-13 13:49:07.910435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:33.264 [2024-07-13 13:49:07.910626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.910658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.910813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.910845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.911018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.911054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.911249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.911280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.911474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.911509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.264 qpair failed and we were unable to recover it. 00:37:33.264 [2024-07-13 13:49:07.911707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.264 [2024-07-13 13:49:07.911742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.911922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.911954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 
00:37:33.265 [2024-07-13 13:49:07.912113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.912145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.912317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.912349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.912580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.912612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.912800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.912835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.913010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.913045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.913215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.913247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.913466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.913501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.913690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.913725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.913951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.913984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.914148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.914183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 
00:37:33.265 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=460103
00:37:33.265 [2024-07-13 13:49:07.914371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.265 [2024-07-13 13:49:07.914406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.265 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 460103
00:37:33.265 qpair failed and we were unable to recover it.
00:37:33.265 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:37:33.265 [2024-07-13 13:49:07.914583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.265 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 460103 ']'
00:37:33.265 [2024-07-13 13:49:07.914616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.265 qpair failed and we were unable to recover it.
00:37:33.265 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:33.265 [2024-07-13 13:49:07.914829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.265 [2024-07-13 13:49:07.914872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.265 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:37:33.265 qpair failed and we were unable to recover it.
00:37:33.265 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:33.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:33.265 [2024-07-13 13:49:07.915075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.265 [2024-07-13 13:49:07.915114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.265 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:37:33.265 qpair failed and we were unable to recover it.
00:37:33.265 13:49:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:33.265 [2024-07-13 13:49:07.915313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.265 [2024-07-13 13:49:07.915345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.265 qpair failed and we were unable to recover it.
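The harness trace interleaved with the errors above shows the tc2 flow bringing the target back up: the previous target app (pid 459540) was killed earlier in the log, disconnect_init drives nvmfappstart, common.sh launches a fresh nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 inside the cvl_0_0_ns_spdk namespace as pid 460103, and waitforlisten then waits (max_retries=100) for the new process to answer on the RPC socket /var/tmp/spdk.sock. The sketch below only illustrates that wait-until-listening idea under those assumptions; it is not SPDK's actual waitforlisten helper:

/* wait_for_rpc_sock.c — a rough sketch, not SPDK's waitforlisten: poll an
 * application's RPC Unix socket (/var/tmp/spdk.sock in this log) until it
 * accepts a connection or a retry budget runs out (100 here, echoing the
 * max_retries=100 seen in the trace above).
 * Build: cc wait_for_rpc_sock.c -o wait_for_rpc_sock */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_unix_listener(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

    for (int attempt = 0; attempt < max_retries; attempt++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;          /* listener is up */
        }
        close(fd);
        usleep(100 * 1000);    /* socket absent or not listening yet; retry */
    }
    return -1;                 /* gave up after max_retries attempts */
}

int main(void)
{
    if (wait_for_unix_listener("/var/tmp/spdk.sock", 100) == 0)
        printf("RPC socket is accepting connections\n");
    else
        printf("timed out waiting for RPC socket\n");
    return 0;
}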
00:37:33.265 [2024-07-13 13:49:07.915571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.915606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.915801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.915835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.916021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.916053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.916256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.916291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.916503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.916539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.916740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.916772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.916924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.916961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.917102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.917144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.917317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.917349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.917492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.917524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 
00:37:33.265 [2024-07-13 13:49:07.917663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.917695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.265 [2024-07-13 13:49:07.917942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.265 [2024-07-13 13:49:07.917974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.265 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.918120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.918169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.918385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.918420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.918616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.918649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.918851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.918901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.919128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.919160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.919329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.919360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.919554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.919590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.919780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.919815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 
00:37:33.266 [2024-07-13 13:49:07.920057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.920090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.920286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.920321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.920477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.920512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.920688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.920720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.920875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.920925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.921098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.921130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.921307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.921339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.921535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.921572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.921791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.921827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 00:37:33.266 [2024-07-13 13:49:07.922018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.266 [2024-07-13 13:49:07.922050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.266 qpair failed and we were unable to recover it. 
00:37:33.551 [2024-07-13 13:49:07.963799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.551 [2024-07-13 13:49:07.963831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.551 qpair failed and we were unable to recover it. 00:37:33.551 [2024-07-13 13:49:07.963986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.551 [2024-07-13 13:49:07.964029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.551 qpair failed and we were unable to recover it. 00:37:33.551 [2024-07-13 13:49:07.964231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.964263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.964442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.964473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.964654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.964685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.964862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.964901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.965044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.965076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.965272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.965303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.965508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.965540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.965693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.965727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 
00:37:33.552 [2024-07-13 13:49:07.965906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.965938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.966119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.966151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.966302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.966333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.966512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.966543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.966740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.966771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.966928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.966960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.967139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.967175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.967316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.967347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.967516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.967547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.967755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.967787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 
00:37:33.552 [2024-07-13 13:49:07.967963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.967997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.968171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.968206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.968405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.968437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.968605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.968636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.968776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.968807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.968975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.969008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.969184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.969216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.969393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.969425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.969592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.969623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.969799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.969830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 
00:37:33.552 [2024-07-13 13:49:07.969989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.970021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.970173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.552 [2024-07-13 13:49:07.970204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.552 qpair failed and we were unable to recover it. 00:37:33.552 [2024-07-13 13:49:07.970342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.970373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.970546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.970578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.970751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.970783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.970928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.970961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.971118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.971150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.971326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.971357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.971535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.971566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.971740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.971771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 
00:37:33.553 [2024-07-13 13:49:07.971980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.972013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.972214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.972246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.972400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.972432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.972581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.972613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.972814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.972845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.973001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.973033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.973204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.973235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.973401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.973433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.973578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.973609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.973759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.973790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 
00:37:33.553 [2024-07-13 13:49:07.973992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.974024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.974192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.974224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.974362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.974394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.974563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.974595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.974740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.974771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.974965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.974997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.975144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.975182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.975351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.975383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.975551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.975583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.975757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.975788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 
00:37:33.553 [2024-07-13 13:49:07.975988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.976021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.976165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.976197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.976374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.976406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.976605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.553 [2024-07-13 13:49:07.976637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.553 qpair failed and we were unable to recover it. 00:37:33.553 [2024-07-13 13:49:07.976779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.976811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.976986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.977028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.977208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.977240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.977380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.977412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.977547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.977578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.977777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.977808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 
00:37:33.554 [2024-07-13 13:49:07.977997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.978030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.978208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.978239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.978374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.978405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.978588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.978619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.978821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.978853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.979033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.979064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.979242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.979274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.979472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.979503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.979640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.979672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.979892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.979925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 
00:37:33.554 [2024-07-13 13:49:07.980141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.980173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.980314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.980345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.980520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.980552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.980732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.980764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.980936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.980968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.981137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.981169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.981365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.981397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.554 [2024-07-13 13:49:07.981566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.554 [2024-07-13 13:49:07.981597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.554 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.981763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.981795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.981977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.982008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 
00:37:33.555 [2024-07-13 13:49:07.982183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.982214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.982411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.982443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.982622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.982659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.982810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.982842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.983046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.983077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.983222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.983253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.983454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.983490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.983642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.983673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.983816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.983847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.984002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.984034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 
00:37:33.555 [2024-07-13 13:49:07.984230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.984262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.984408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.984440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.984635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.984667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.984834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.984871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.985053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.985085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.985261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.985292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.985468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.985499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.985668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.985699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.985846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.985883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.986084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.986115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 
00:37:33.555 [2024-07-13 13:49:07.986292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.986324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.986468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.986500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.986649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.986681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.986889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.986922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.987095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.987127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.987302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.987333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.987497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.987528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.987731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.987762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.555 [2024-07-13 13:49:07.987944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.555 [2024-07-13 13:49:07.987976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.555 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.988147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.988179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 
00:37:33.556 [2024-07-13 13:49:07.988374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.988406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.988556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.988588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.988764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.988796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.988998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.989031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.989199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.989231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.989400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.989431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.989582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.989614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.989793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.989825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.990010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.990042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.990223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.990265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 
00:37:33.556 [2024-07-13 13:49:07.990440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.990471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.990625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.990657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.990837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.990885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.991081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.991114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.991260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.991291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.991470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.991501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.991698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.991734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.991879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.991912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.992084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.992116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.992269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.992300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 
00:37:33.556 [2024-07-13 13:49:07.992471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.992503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.992698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.992729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.992884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.992916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.993060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.993092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.993298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.993330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.993506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.993538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.993709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.993740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.993943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.993976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.994156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.994188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.556 [2024-07-13 13:49:07.994362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.994394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 
00:37:33.556 [2024-07-13 13:49:07.994568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.556 [2024-07-13 13:49:07.994600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.556 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.994772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.994805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.994961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.994994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.995144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.995175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.995310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.995342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.995522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.995554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.995704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.995736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.995906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.995938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.996091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.996123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.996323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.996355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 
00:37:33.557 [2024-07-13 13:49:07.996554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.996586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.996730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.996762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.996909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.996941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.997118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.997150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.997316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.997348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.997488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.997520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.997684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.997716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.997891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.997923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.998073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.998106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:07.998274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:07.998306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 
00:37:33.557 [2024-07-13 13:49:07.998463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.557 [2024-07-13 13:49:07.998463] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:37:33.557 [2024-07-13 13:49:07.998495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.557 qpair failed and we were unable to recover it.
00:37:33.557 [2024-07-13 13:49:07.998586] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:33.557 [2024-07-13 13:49:07.998671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.557 [2024-07-13 13:49:07.998702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.557 qpair failed and we were unable to recover it.
00:37:33.557 [2024-07-13 13:49:07.998891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.557 [2024-07-13 13:49:07.998922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.557 qpair failed and we were unable to recover it.
00:37:33.557 [2024-07-13 13:49:07.999107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.557 [2024-07-13 13:49:07.999139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.557 qpair failed and we were unable to recover it.
00:37:33.557 [2024-07-13 13:49:07.999318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.557 [2024-07-13 13:49:07.999350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.557 qpair failed and we were unable to recover it.
00:37:33.557 [2024-07-13 13:49:07.999526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.557 [2024-07-13 13:49:07.999562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.557 qpair failed and we were unable to recover it.
00:37:33.557 [2024-07-13 13:49:07.999731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.557 [2024-07-13 13:49:07.999763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.557 qpair failed and we were unable to recover it.
00:37:33.557 [2024-07-13 13:49:07.999906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.557 [2024-07-13 13:49:07.999939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.557 qpair failed and we were unable to recover it.
00:37:33.557 [2024-07-13 13:49:08.000115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.557 [2024-07-13 13:49:08.000146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.557 qpair failed and we were unable to recover it.
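The repeated pair of records above comes from the host side of the NVMe/TCP connection attempt: posix_sock_create() reports the raw connect() failure, then nvme_tcp_qpair_connect_sock() marks the queue pair as failed and the host retries. On Linux, errno 111 is ECONNREFUSED, meaning 10.0.0.2 is reachable but nothing is listening on port 4420 (the well-known NVMe/TCP port) at that moment, consistent with the target application still being in DPDK EAL initialization. A minimal standalone sketch of the same socket-level failure, assuming only a plain Linux host (illustrative code, not part of SPDK):

    /* Illustrative sketch only, not SPDK code: reproduces the
     * "connect() failed, errno = 111" seen in the records above.
     * On Linux, errno 111 is ECONNREFUSED: the peer host responds,
     * but no listener is bound to 10.0.0.2:4420 yet. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* well-known NVMe/TCP port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

Run against an address with no listener, this prints "connect() failed, errno = 111 (Connection refused)", which is the condition the host keeps hitting until the target starts accepting connections.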
00:37:33.557 [2024-07-13 13:49:08.000346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.557 [2024-07-13 13:49:08.000377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.557 qpair failed and we were unable to recover it. 00:37:33.557 [2024-07-13 13:49:08.000577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.000609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.000785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.000817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.000984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.001019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.001192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.001225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.001399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.001431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.001629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.001661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.001860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.001900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.002069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.002101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.002254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.002286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 
00:37:33.558 [2024-07-13 13:49:08.002466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.002498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.002685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.002717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.002893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.002926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.003074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.003106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.003262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.003305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.003483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.003516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.003689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.003721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.003899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.003932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.004109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.004141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.004313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.004346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 
00:37:33.558 [2024-07-13 13:49:08.004520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.004552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.004722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.004754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.004912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.004945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.005105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.005136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.005307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.005338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.005511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.005543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.005715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.005747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.005917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.005950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.006150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.006182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.006328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.006360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 
00:37:33.558 [2024-07-13 13:49:08.006513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.006544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.006713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.006745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.006918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.006952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.007126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.007158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.007331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.558 [2024-07-13 13:49:08.007363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.558 qpair failed and we were unable to recover it. 00:37:33.558 [2024-07-13 13:49:08.007543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.007575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.007742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.007778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.007948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.007981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.008129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.008161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.008305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.008337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 
00:37:33.559 [2024-07-13 13:49:08.008491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.008523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.008698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.008756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.008961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.008994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.009167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.009199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.009379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.009411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.009591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.009622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.009768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.009801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.009972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.010004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.010182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.010214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.010364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.010396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 
00:37:33.559 [2024-07-13 13:49:08.010543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.010574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.010745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.010777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.010923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.010955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.011133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.011165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.011346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.011377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.011518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.011550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.011700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.011733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.011938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.011971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.012146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.012178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.012319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.012350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 
00:37:33.559 [2024-07-13 13:49:08.012516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.012547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.012750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.012782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.012936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.012968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.013148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.013180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.013343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.013374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.013549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.013580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.013726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.013757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.013940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.559 [2024-07-13 13:49:08.013972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.559 qpair failed and we were unable to recover it. 00:37:33.559 [2024-07-13 13:49:08.014114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.014146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.014345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.014377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 
00:37:33.560 [2024-07-13 13:49:08.014552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.014584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.014751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.014781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.014919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.014951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.015100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.015132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.015308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.015340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.015514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.015547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.015719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.015755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.015934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.015966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.016115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.016162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.016347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.016379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 
00:37:33.560 [2024-07-13 13:49:08.016530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.016562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.016731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.016763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.016935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.016968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.017148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.017182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.017331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.017363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.017513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.017544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.017701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.017732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.017937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.017974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.018126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.018157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.018324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.018355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 
00:37:33.560 [2024-07-13 13:49:08.018525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.018556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.560 [2024-07-13 13:49:08.018726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.560 [2024-07-13 13:49:08.018757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.560 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.018956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.018988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.019129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.019160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.019331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.019363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.019503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.019534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.019691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.019723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.019890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.019922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.020098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.020129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.020275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.020306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 
00:37:33.561 [2024-07-13 13:49:08.020478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.020510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.020694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.020725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.020873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.020905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.021087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.021118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.021257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.021289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.021435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.021467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.021623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.021654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.021833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.021864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.022044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.022076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.022247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.022278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 
00:37:33.561 [2024-07-13 13:49:08.022450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.022481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.022679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.022710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.022859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.022908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.023045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.023076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.023245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.023277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.023452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.023483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.023659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.023695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.023876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.023908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.024078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.024110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.024276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.024308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 
00:37:33.561 [2024-07-13 13:49:08.024460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.024491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.024691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.024722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.561 [2024-07-13 13:49:08.024878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-07-13 13:49:08.024911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.561 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.025104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.025136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.025305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.025337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.025501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.025533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.025733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.025765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.025914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.025948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.026149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.026181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.026354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.026385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 
00:37:33.562 [2024-07-13 13:49:08.026568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.026600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.026774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.026805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.026985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.027018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.027168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.027200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.027352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.027384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.027558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.027590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.027768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.027800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.027950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.027982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.028121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.028153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.028290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.028322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 
00:37:33.562 [2024-07-13 13:49:08.028497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.028528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.028698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.028730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.028903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.028946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.029132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.029164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.029361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.029392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.029532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.029563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.029729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.029761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.029926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.029959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.030129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.030160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.030301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.030333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 
00:37:33.562 [2024-07-13 13:49:08.030511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.030543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.030714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.030745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.562 [2024-07-13 13:49:08.030932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-07-13 13:49:08.030965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.562 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.031166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.031198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.031371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.031402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.031579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.031610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.031756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.031792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.031979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.032011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.032162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.032194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.032359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.032391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 
00:37:33.563 [2024-07-13 13:49:08.032597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.032630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.032776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.032807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.032978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.033011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.033187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.033219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.033388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.033420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.033596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.033628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.033775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.033807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.034010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.034042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.034194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.034231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.034373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.034404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 
00:37:33.563 [2024-07-13 13:49:08.034558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.034590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.034760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.034792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.034947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.034979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.035123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.035154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.035332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.035363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.035534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.035565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.035733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.035764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.035904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.035941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.036142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.036174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.036362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.036394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 
00:37:33.563 [2024-07-13 13:49:08.036561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.036592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.036769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.036801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.036959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.036991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.563 qpair failed and we were unable to recover it. 00:37:33.563 [2024-07-13 13:49:08.037194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-07-13 13:49:08.037226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.037432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.037463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.037634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.037666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.037856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.037894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.038069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.038101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.038313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.038345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.038547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.038579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 
00:37:33.564 [2024-07-13 13:49:08.038733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.038764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.038923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.038955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.039131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.039163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.039369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.039401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.039584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.039616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.039771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.039803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.039950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.039987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.040163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.040194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.040340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.040372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.040548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.040581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 
00:37:33.564 [2024-07-13 13:49:08.040753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.040785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.040920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.040953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.041135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.041167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.041344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.041376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.041511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.041543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.041709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.041741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.041910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.041953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.042111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.042143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.042314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.042345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.042494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.042525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 
00:37:33.564 [2024-07-13 13:49:08.042663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.042695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.042840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.564 [2024-07-13 13:49:08.042878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.564 qpair failed and we were unable to recover it. 00:37:33.564 [2024-07-13 13:49:08.043061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.043093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.043269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.043302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.043475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.043507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.043684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.043716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.043855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.043894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.044043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.044075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.044217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.044248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.044419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.044450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 
00:37:33.565 [2024-07-13 13:49:08.044648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.044680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.044883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.044916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.045061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.045093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.045297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.045349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.045507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.045543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.045697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.045731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.045891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.045927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.046101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.046134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.046319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.046352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.046558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.046590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 
00:37:33.565 [2024-07-13 13:49:08.046756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.046789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.046970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.047015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.047169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.047204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.047408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.047442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.047627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.047661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.047801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.047834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.048013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.048052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.048254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.048286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.048502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.048536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.048745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.048779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 
00:37:33.565 [2024-07-13 13:49:08.048957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.048990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.049137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.049169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.049346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.049379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.565 qpair failed and we were unable to recover it. 00:37:33.565 [2024-07-13 13:49:08.049556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.565 [2024-07-13 13:49:08.049587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.049763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.049797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.049976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.050010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.050163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.050197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.050374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.050407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.050583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.050615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.050766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.050799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 
00:37:33.566 [2024-07-13 13:49:08.050985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.051018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.051190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.051223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.051407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.051440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.051636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.051669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.051838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.051876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.052031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.052063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.052267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.052302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.052504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.052536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.052678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.052709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.052886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.052919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 
00:37:33.566 [2024-07-13 13:49:08.053073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.053105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.053258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.053290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.053428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.053460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.053640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.053673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.053820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.566 [2024-07-13 13:49:08.053851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.566 qpair failed and we were unable to recover it. 00:37:33.566 [2024-07-13 13:49:08.054059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.054091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.054242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.054274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.054481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.054513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.054652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.054684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.054857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.054895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 
00:37:33.567 [2024-07-13 13:49:08.055051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.055083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.055258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.055306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.055495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.055530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.055727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.055761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.055911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.055944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.056096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.056129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.056285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.056323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.056527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.056560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.056709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.056742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.056945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.056978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 
00:37:33.567 [2024-07-13 13:49:08.057127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.057159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.057303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.057336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.057490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.057522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.057694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.057728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.057882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.057916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.058092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.058125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.058301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.058333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.058506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.058540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.058717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.058749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.058927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.058961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 
00:37:33.567 [2024-07-13 13:49:08.059146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.059178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.059358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.059390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.059550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.059582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.059752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.059784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.567 [2024-07-13 13:49:08.059938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.567 [2024-07-13 13:49:08.059970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.567 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.060152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.060184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.060383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.060415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.060585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.060617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.060813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.060845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.061007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.061039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 
00:37:33.568 [2024-07-13 13:49:08.061185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.061217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.061376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.061407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.061582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.061613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.061771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.061803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.061976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.062009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.062182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.062214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.062396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.062429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.062629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.062660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.062835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.062873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.063045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.063087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 
00:37:33.568 [2024-07-13 13:49:08.063266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.063298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.063465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.063496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.063679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.063710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.063888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.063921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.064059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.064092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.064289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.064321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.064494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.064530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.064681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.064712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.064859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.064898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.065076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.065123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 
00:37:33.568 [2024-07-13 13:49:08.065308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.065343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.065526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.065559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.065707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.065740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.065926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.065960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.568 [2024-07-13 13:49:08.066111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.568 [2024-07-13 13:49:08.066145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.568 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.066317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.066350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.066525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.066558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.066736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.066769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.066927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.066961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.067115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.067146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 
00:37:33.569 [2024-07-13 13:49:08.067313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.067345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.067517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.067548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.067721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.067753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.067901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.067934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.068087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.068121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.068275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.068310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.068494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.068526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.068704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.068736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.068910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.068944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.069098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.069131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 
00:37:33.569 [2024-07-13 13:49:08.069328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.069360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.069543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.069576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.069723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.069756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.069962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.069997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.070149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.070180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.070332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.070364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.070564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.070595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.070737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.070769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.070971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.071004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.071169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.071201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 
00:37:33.569 [2024-07-13 13:49:08.071350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.071382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.071527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.071559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.071738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.071773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.569 qpair failed and we were unable to recover it. 00:37:33.569 [2024-07-13 13:49:08.071946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.569 [2024-07-13 13:49:08.071979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.072132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.072165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.072314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.072347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.072518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.072554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.072733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.072766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.072937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.072971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.073173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.073210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 
00:37:33.570 [2024-07-13 13:49:08.073387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.073419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.073593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.073624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.073798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.073830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.073989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.074021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.074159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.074191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.074367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.074398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.074594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.074625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.074794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.074825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.075030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.075062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.075206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 EAL: No free 2048 kB hugepages reported on node 1 00:37:33.570 [2024-07-13 13:49:08.075237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 
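The interleaved "EAL: No free 2048 kB hugepages reported on node 1" line above comes from the DPDK environment abstraction layer rather than the NVMe/TCP code, and normally means no free 2 MB hugepages were visible on that NUMA node when the target initialized. As a minimal illustrative sketch (not part of this CI run, and not SPDK code), the standard Linux sysfs counters could be read to check hugepage availability before launching a target; the paths below are the usual kernel locations, assumed here for illustration:

/* Hedged sketch: check free 2048 kB hugepages via sysfs (standard Linux paths). */
#include <stdio.h>
#include <stdlib.h>

static long read_long(const char *path)
{
    FILE *f = fopen(path, "r");
    long val = -1;

    if (f == NULL)
        return -1;
    if (fscanf(f, "%ld", &val) != 1)
        val = -1;
    fclose(f);
    return val;
}

int main(void)
{
    /* Per-node counter matches the EAL message, which names node 1. */
    long free_node1 = read_long(
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages");
    long free_total = read_long(
        "/sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages");

    printf("free 2048 kB hugepages: node1=%ld total=%ld\n", free_node1, free_total);
    return (free_total > 0) ? EXIT_SUCCESS : EXIT_FAILURE;
}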
00:37:33.570 [2024-07-13 13:49:08.075420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.075452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.075605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.075638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.075778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.075811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.075968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.076001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.076144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.076175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.076355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.076387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.076555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.076587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.076787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.076818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.076978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.077010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.077180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.077212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 
00:37:33.570 [2024-07-13 13:49:08.077359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.077391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.077562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.077594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.077737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.570 [2024-07-13 13:49:08.077768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.570 qpair failed and we were unable to recover it. 00:37:33.570 [2024-07-13 13:49:08.077931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.077964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.078161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.078193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.078363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.078395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.078547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.078579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.078745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.078776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.078945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.078977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.079144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.079176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 
00:37:33.571 [2024-07-13 13:49:08.079314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.079346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.079518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.079550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.079687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.079719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.079915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.079947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.080157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.080204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.080362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.080398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.080609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.080647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.080826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.080859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.081082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.081116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.081291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.081323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 
00:37:33.571 [2024-07-13 13:49:08.081510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.081543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.081720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.081752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.081948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.081980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.082154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.082196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.082371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.082403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.082572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.082604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.082750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.082781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.082974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.083006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.083149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.083181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.083349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.083381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 
00:37:33.571 [2024-07-13 13:49:08.083582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.083613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.083812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.083844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.084004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.571 [2024-07-13 13:49:08.084035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.571 qpair failed and we were unable to recover it. 00:37:33.571 [2024-07-13 13:49:08.084177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.084209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.084406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.084438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.084635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.084667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.084842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.084889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.085057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.085088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.085259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.085291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.085455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.085489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 
00:37:33.572 [2024-07-13 13:49:08.085664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.085695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.085847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.085888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.086063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.086095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.086248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.086280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.086457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.086489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.086689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.086720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.086873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.086906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.087082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.087113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.087262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.087294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.087442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.087473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 
00:37:33.572 [2024-07-13 13:49:08.087618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.087649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.087796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.087827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.088019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.088051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.088200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.088232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.088399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.572 [2024-07-13 13:49:08.088431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.572 qpair failed and we were unable to recover it. 00:37:33.572 [2024-07-13 13:49:08.088570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.088601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.088783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.088819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.088985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.089018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.089162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.089194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.089367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.089398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 
00:37:33.573 [2024-07-13 13:49:08.089540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.089572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.089741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.089773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.089936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.089969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.090146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.090187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.090349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.090381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.090546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.090578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.090785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.090817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.090973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.091005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.091160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.091192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.091337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.091368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 
00:37:33.573 [2024-07-13 13:49:08.091570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.091601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.091744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.091776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.091927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.091960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.092114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.092152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.092354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.092386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.092558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.092590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.092763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.092795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.092980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.093029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.093192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.093229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.093447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.093481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 
00:37:33.573 [2024-07-13 13:49:08.093652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.093697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.093889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.093922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.094076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.094108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.094261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.094295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.573 qpair failed and we were unable to recover it. 00:37:33.573 [2024-07-13 13:49:08.094465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.573 [2024-07-13 13:49:08.094497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.094649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.094681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.094828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.094860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.095063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.095095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.095272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.095304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.095454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.095485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 
00:37:33.574 [2024-07-13 13:49:08.095642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.095674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.095840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.095882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.096038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.096071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.096271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.096313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.096468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.096499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.096668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.096699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.096842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.096894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.097046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.097077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.097226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.097259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.097410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.097442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 
00:37:33.574 [2024-07-13 13:49:08.097591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.097622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.097769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.097801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.097956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.097989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.098134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.098172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.098346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.098378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.098529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.098560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.098726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.098759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.098922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.098954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.099105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.099146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.099317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.099349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 
00:37:33.574 [2024-07-13 13:49:08.099511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.099543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.099731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.099765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.099924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.099956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.100135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.100169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.574 qpair failed and we were unable to recover it. 00:37:33.574 [2024-07-13 13:49:08.100357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.574 [2024-07-13 13:49:08.100389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.100585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.100618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.100820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.100852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.101021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.101054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.101251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.101284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.101457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.101489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 
00:37:33.575 [2024-07-13 13:49:08.101637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.101669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.101873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.101905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.102057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.102089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.102290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.102339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.102546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.102580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.102760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.102793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.102996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.103031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.103215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.103278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.103450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.103483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.103635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.103668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 
00:37:33.575 [2024-07-13 13:49:08.103848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.103890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.104045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.104077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.104276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.104308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.104479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.104512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.104669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.104701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.104854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.104903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.105100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.105137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.105314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.105347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.105523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.105555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.105756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.105789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 
00:37:33.575 [2024-07-13 13:49:08.105940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.105973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.106150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.106183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.106360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.106393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.106547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.106578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.106758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.106790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.106951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.106984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.107138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.107171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.575 [2024-07-13 13:49:08.107341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.575 [2024-07-13 13:49:08.107374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.575 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.107551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.107583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.107780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.107812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 
00:37:33.576 [2024-07-13 13:49:08.108006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.108039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.108211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.108243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.108440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.108472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.108649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.108681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.108873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.108908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.109086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.109118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.109316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.109348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.109507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.109539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.109722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.109755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.109957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.109990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 
00:37:33.576 [2024-07-13 13:49:08.110139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.110172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.110342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.110373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.110548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.110581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.110759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.110791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.110933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.110966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.111177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.111209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.111352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.111384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.111571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.111603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.111780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.111812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.111967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.111999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 
00:37:33.576 [2024-07-13 13:49:08.112175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.112208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.112408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.112440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.112610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.112642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.112814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.112846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.113003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.113035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.113188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.113221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.113400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.113437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.113613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.113644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.113845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.113885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 00:37:33.576 [2024-07-13 13:49:08.114066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.576 [2024-07-13 13:49:08.114098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.576 qpair failed and we were unable to recover it. 
00:37:33.577 [2024-07-13 13:49:08.114277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.114309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.114479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.114510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.114687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.114720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.114930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.114963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.115111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.115143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.115288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.115321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.115475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.115508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.115686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.115718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.115890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.115923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.116118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.116165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 
00:37:33.577 [2024-07-13 13:49:08.116332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.116366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.116517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.116549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.116699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.116730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.116904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.116937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.117084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.117116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.117285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.117317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.117490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.117522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.117694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.117726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.117880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.117913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.118065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.118097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 
00:37:33.577 [2024-07-13 13:49:08.118252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.118285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.118457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.577 [2024-07-13 13:49:08.118489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.577 qpair failed and we were unable to recover it. 00:37:33.577 [2024-07-13 13:49:08.118638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.118671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.118848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.118888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.119079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.119127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.119321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.119356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.119569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.119615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.119769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.119801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.119973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.120006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.120208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.120241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 
00:37:33.578 [2024-07-13 13:49:08.120395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.120429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.120603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.120635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.120812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.120844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.121006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.121038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.121183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.121215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.121391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.121422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.121582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.121619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.121789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.121821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.121990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.122023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.122200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.122232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 
00:37:33.578 [2024-07-13 13:49:08.122421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.122453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.122628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.122660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.122853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.122892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.123037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.123069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.123222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.123256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.123403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.123435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.123612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.123644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.123813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.123845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.124043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.124075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.124218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.124260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 
00:37:33.578 [2024-07-13 13:49:08.124416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.124447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.124617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.124649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.578 qpair failed and we were unable to recover it. 00:37:33.578 [2024-07-13 13:49:08.124815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.578 [2024-07-13 13:49:08.124846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.125035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.125067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.125236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.125268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.125413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.125445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.125613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.125645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.125795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.125826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.125983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.126016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.126190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.126222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 
00:37:33.579 [2024-07-13 13:49:08.126385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.126416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.126614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.126645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.126787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.126818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.127011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.127059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.127244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.127279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.127451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.127484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.127669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.127702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.127880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.127914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.128092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.128125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.128293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.128325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 
00:37:33.579 [2024-07-13 13:49:08.128505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.128537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.128689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.128721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.128917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.128964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.129126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.129162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.129359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.129391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.129534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.129566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.129755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.129788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.129969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.130002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.130201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.130233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.130403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.130435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 
00:37:33.579 [2024-07-13 13:49:08.130580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.130612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.579 qpair failed and we were unable to recover it. 00:37:33.579 [2024-07-13 13:49:08.130789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.579 [2024-07-13 13:49:08.130824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.131032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.131065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.131234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.131267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.131433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.131465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.131619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.131650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.131812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.131844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.132043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.132090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.132253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.132289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.132451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.132483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 
00:37:33.580 [2024-07-13 13:49:08.132696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.132729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.132883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.132915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.133074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.133106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.133288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.133320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.133475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.133506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.133700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.133732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.133915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.133947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.134118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.134150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.134298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.134331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.134508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.134539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 
00:37:33.580 [2024-07-13 13:49:08.134677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.134709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.134888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.134921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.135085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.135117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.135307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.135344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.135496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.135528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.135679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.135710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.135888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.135920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.136057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.136088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.136259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.136290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.136433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.136464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 
00:37:33.580 [2024-07-13 13:49:08.136616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.136649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.136794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.136825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.580 qpair failed and we were unable to recover it. 00:37:33.580 [2024-07-13 13:49:08.137017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.580 [2024-07-13 13:49:08.137049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.137198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.137230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.137385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.137417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.137573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.137605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.137746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.137778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.137945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.137978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.138122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.138154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.138294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.138326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 
00:37:33.581 [2024-07-13 13:49:08.138525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.138557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.138699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.138732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.138924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.138972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.139176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.139222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.139419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.139456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.139631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.139664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.139870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.139903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.140105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.140139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.140348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.140380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.140544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.140577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 
00:37:33.581 [2024-07-13 13:49:08.140724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.140757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.140777] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:33.581 [2024-07-13 13:49:08.140941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.140988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.141188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.141235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.141390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.141427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.141639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.141672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.141848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.141886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.142036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.142068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.142246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.142278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.142457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.142490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 
00:37:33.581 [2024-07-13 13:49:08.142664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.142698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.142837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.142875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.143057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.143094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.581 [2024-07-13 13:49:08.143274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.581 [2024-07-13 13:49:08.143307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.581 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.143460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.143492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.143690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.143722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.143893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.143925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.144077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.144121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.144323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.144357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.144563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.144595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 
00:37:33.582 [2024-07-13 13:49:08.144743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.144775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.144954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.144989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.145137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.145170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.145333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.145365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.145542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.145575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.145727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.145759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.145954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.145987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.146143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.146184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.146359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.146392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.146537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.146569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 
00:37:33.582 [2024-07-13 13:49:08.146743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.146775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.146927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.146960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.147103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.147136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.147312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.147344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.147520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.147552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.147707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.147741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.147901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.147934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.148106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.148140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.148291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.148322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.148500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.148533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 
00:37:33.582 [2024-07-13 13:49:08.148683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.148716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.148905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.148938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.149113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.149145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.149324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.149357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.149529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.149562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.582 [2024-07-13 13:49:08.149733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.582 [2024-07-13 13:49:08.149765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.582 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.149918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.149950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.150181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.150230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.150453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.150502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.150685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.150720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 
00:37:33.583 [2024-07-13 13:49:08.150878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.150912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.151090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.151122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.151295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.151327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.151475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.151506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.151671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.151703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.151847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.151886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.152039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.152072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.152308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.152355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.152512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.152546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.152728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.152761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 
00:37:33.583 [2024-07-13 13:49:08.152918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.152953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.153144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.153178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.153324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.153356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.153501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.153534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.153710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.153743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.153957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.154004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.154164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.154214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.154411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.154450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.154664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.154696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 00:37:33.583 [2024-07-13 13:49:08.154884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.583 [2024-07-13 13:49:08.154918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.583 qpair failed and we were unable to recover it. 
00:37:33.583 [2024-07-13 13:49:08.155094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.155127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.155280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.155312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.155517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.155549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.155712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.155744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.155947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.155980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.156175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.156222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.156382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.156416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.156602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.156635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.156834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.156872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.157029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.157061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 
00:37:33.584 [2024-07-13 13:49:08.157234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.157266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.157444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.157476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.157652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.157684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.157890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.157924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.158103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.158135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.158276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.158308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.158483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.158515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.158687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.158720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.158874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.158907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.159087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.159119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 
00:37:33.584 [2024-07-13 13:49:08.159295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.159328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.159514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.159547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.159715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.159746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.159918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.159950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.160100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.160132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.160330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.160362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.160531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.160562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.160714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.160746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.160887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.160920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.161094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.161126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 
00:37:33.584 [2024-07-13 13:49:08.161270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.584 [2024-07-13 13:49:08.161302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.584 qpair failed and we were unable to recover it. 00:37:33.584 [2024-07-13 13:49:08.161437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.585 [2024-07-13 13:49:08.161469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.585 qpair failed and we were unable to recover it. 00:37:33.585 [2024-07-13 13:49:08.161664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.585 [2024-07-13 13:49:08.161697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.585 qpair failed and we were unable to recover it. 00:37:33.585 [2024-07-13 13:49:08.161840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.585 [2024-07-13 13:49:08.161884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.585 qpair failed and we were unable to recover it. 00:37:33.585 [2024-07-13 13:49:08.162064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.585 [2024-07-13 13:49:08.162097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.585 qpair failed and we were unable to recover it. 00:37:33.585 [2024-07-13 13:49:08.162253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.585 [2024-07-13 13:49:08.162285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.585 qpair failed and we were unable to recover it. 00:37:33.585 [2024-07-13 13:49:08.162424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.585 [2024-07-13 13:49:08.162456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.585 qpair failed and we were unable to recover it. 00:37:33.585 [2024-07-13 13:49:08.162602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.585 [2024-07-13 13:49:08.162638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.585 qpair failed and we were unable to recover it. 00:37:33.585 [2024-07-13 13:49:08.162805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.585 [2024-07-13 13:49:08.162837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.585 qpair failed and we were unable to recover it. 00:37:33.585 [2024-07-13 13:49:08.162988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.585 [2024-07-13 13:49:08.163020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.585 qpair failed and we were unable to recover it. 
00:37:33.585 [2024-07-13 13:49:08.163215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.585 [2024-07-13 13:49:08.163246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:33.585 qpair failed and we were unable to recover it.
00:37:33.585 [2024-07-13 13:49:08.165226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:33.585 [2024-07-13 13:49:08.165272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:33.585 qpair failed and we were unable to recover it.
[... the identical three-line failure sequence (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats continuously from 13:49:08.163215 through 13:49:08.206374, alternating between tqpair=0x6150001f2780 and tqpair=0x6150001ffe80, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:37:33.592 [2024-07-13 13:49:08.206548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.206590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 00:37:33.592 [2024-07-13 13:49:08.206747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.206778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 00:37:33.592 [2024-07-13 13:49:08.206926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.206959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 00:37:33.592 [2024-07-13 13:49:08.207125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.207157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 00:37:33.592 [2024-07-13 13:49:08.207334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.207365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 00:37:33.592 [2024-07-13 13:49:08.207567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.207599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 00:37:33.592 [2024-07-13 13:49:08.207750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.207782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 00:37:33.592 [2024-07-13 13:49:08.207966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.207998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 00:37:33.592 [2024-07-13 13:49:08.208159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.208191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 00:37:33.592 [2024-07-13 13:49:08.208363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.208395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 
00:37:33.592 [2024-07-13 13:49:08.208544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.208576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 00:37:33.592 [2024-07-13 13:49:08.208744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.208776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 00:37:33.592 [2024-07-13 13:49:08.208950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.208982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 00:37:33.592 [2024-07-13 13:49:08.209149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.592 [2024-07-13 13:49:08.209181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.592 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.209353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.209384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.209527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.209558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.209729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.209761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.209963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.209995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.210150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.210182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.210355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.210387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 
00:37:33.593 [2024-07-13 13:49:08.210556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.210587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.210728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.210760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.210925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.210973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.211155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.211189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.211365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.211398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.211553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.211587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.211738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.211771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.211974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.212014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.212172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.212205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.212383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.212415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 
00:37:33.593 [2024-07-13 13:49:08.212568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.212600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.212749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.212780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.212945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.212977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.213118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.213149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.213352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.213383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.213555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.213586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.213752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.213784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.213997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.214044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.214266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.214301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.214475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.214508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 
00:37:33.593 [2024-07-13 13:49:08.214693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.214725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.214913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.214946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.215133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.215165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.215319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.215352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.215533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.215564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.593 [2024-07-13 13:49:08.215730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.593 [2024-07-13 13:49:08.215762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.593 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.215955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.216003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.216184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.216218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.216369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.216402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.216544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.216576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 
00:37:33.594 [2024-07-13 13:49:08.216720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.216751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.216952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.216985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.217138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.217171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.217320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.217354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.217561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.217593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.217743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.217774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.217969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.218003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.218149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.218181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.218334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.218366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.218567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.218599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 
00:37:33.594 [2024-07-13 13:49:08.218780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.218812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.218998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.219031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.219234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.219265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.219417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.219448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.219601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.219633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.219776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.219808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.219984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.220017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.220182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.220229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.220412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.220446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.220602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.220634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 
00:37:33.594 [2024-07-13 13:49:08.220811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.220845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.221050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.221082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.221236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.221269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.221458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.221491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.221640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.221672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.221849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.221891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.222063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.222095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.222283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.594 [2024-07-13 13:49:08.222315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.594 qpair failed and we were unable to recover it. 00:37:33.594 [2024-07-13 13:49:08.222488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.222521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.222700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.222732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 
00:37:33.595 [2024-07-13 13:49:08.222911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.222944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.223104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.223136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.223312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.223344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.223517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.223549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.223716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.223748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.223941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.223974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.224150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.224183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.224383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.224414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.224613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.224645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.224792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.224824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 
00:37:33.595 [2024-07-13 13:49:08.225012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.225045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.225222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.225254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.225422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.225454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.225635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.225667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.225876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.225913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.226056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.226088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.226292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.226324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.226525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.226557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.226726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.226757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.226931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.226965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 
00:37:33.595 [2024-07-13 13:49:08.227162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.227195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.227349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.227382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.227524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.227567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.227706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.227738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.227905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.227937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.228093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.228125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.595 [2024-07-13 13:49:08.228300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.595 [2024-07-13 13:49:08.228331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.595 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.228497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.228529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.228697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.228730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.228904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.228937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 
00:37:33.596 [2024-07-13 13:49:08.229108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.229140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.229316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.229349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.229524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.229555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.229731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.229763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.229932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.229964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.230158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.230205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.230386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.230420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.230599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.230631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.230777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.230809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.231032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.231065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 
00:37:33.596 [2024-07-13 13:49:08.231208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.231240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.231427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.231459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.231641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.231673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.231828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.231861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.232018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.232050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.232224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.232256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.232424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.232456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.232602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.232634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.232818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.232852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.233046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.233078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 
00:37:33.596 [2024-07-13 13:49:08.233254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.233286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.233424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.233456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.233607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.233639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.233833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.233886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.234069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.234108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.234280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.234314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.234461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.234494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.234662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.234694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.234843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.234885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.235093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.235138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 
00:37:33.596 [2024-07-13 13:49:08.235291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.235323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.235499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.235531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.235733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.235765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.235905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.235938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.236114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.596 [2024-07-13 13:49:08.236146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.596 qpair failed and we were unable to recover it. 00:37:33.596 [2024-07-13 13:49:08.236320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.236351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 [2024-07-13 13:49:08.236521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.236553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 [2024-07-13 13:49:08.236696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.236728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 [2024-07-13 13:49:08.236931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.236964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 [2024-07-13 13:49:08.237106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.237138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 
00:37:33.597 [2024-07-13 13:49:08.237316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.237347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 [2024-07-13 13:49:08.237520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.237552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 [2024-07-13 13:49:08.237748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.237780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 [2024-07-13 13:49:08.237936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.237970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 [2024-07-13 13:49:08.238113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.238145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 [2024-07-13 13:49:08.238318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.238350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 [2024-07-13 13:49:08.238520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.238551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 [2024-07-13 13:49:08.238706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.238738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 [2024-07-13 13:49:08.238926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.238959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 [2024-07-13 13:49:08.239106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.239138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 
00:37:33.597 [2024-07-13 13:49:08.239337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.239368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:33.597 qpair failed and we were unable to recover it. 00:37:33.597 A controller has encountered a failure and is being reset. 00:37:33.597 [2024-07-13 13:49:08.239710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.597 [2024-07-13 13:49:08.239764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:37:33.597 [2024-07-13 13:49:08.239793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:37:33.597 [2024-07-13 13:49:08.239833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:37:33.597 [2024-07-13 13:49:08.239862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.597 [2024-07-13 13:49:08.239899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.597 [2024-07-13 13:49:08.239923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.597 Unable to reset the controller. 00:37:33.855 [2024-07-13 13:49:08.388731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:33.855 [2024-07-13 13:49:08.388799] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:33.855 [2024-07-13 13:49:08.388850] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:33.855 [2024-07-13 13:49:08.388879] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:33.856 [2024-07-13 13:49:08.388902] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
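The app_setup_trace notices above describe how the qpair failures in this run can be inspected after the fact. A minimal sketch, using only the commands the notices themselves mention (shm id 0 and the nvmf app name come from this run; the copy destination is illustrative):

    spdk_trace -s nvmf -i 0            # snapshot the nvmf tracepoints at runtime
    cp /dev/shm/nvmf_trace.0 /tmp/     # keep the raw trace file for offline analysis/debug

Since the tracepoint group mask 0xFFFF logged above enables every group, such a snapshot would include the TCP qpair events leading up to the reset attempt.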
00:37:33.856 [2024-07-13 13:49:08.389007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:37:33.856 [2024-07-13 13:49:08.389058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:37:33.856 [2024-07-13 13:49:08.389103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:37:33.856 [2024-07-13 13:49:08.389113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:37:34.422 13:49:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:34.422 13:49:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:37:34.422 13:49:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:34.422 13:49:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:34.422 13:49:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 13:49:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:34.422 13:49:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:34.422 13:49:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.422 13:49:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 Malloc0 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 [2024-07-13 13:49:09.012195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 13:49:09 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 [2024-07-13 13:49:09.041419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.422 13:49:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 459694 00:37:34.681 Controller properly reset. 00:37:39.940 Initializing NVMe Controllers 00:37:39.940 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:39.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:39.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:39.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:39.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:39.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:39.940 Initialization complete. Launching workers. 
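For reference, the tc2 target configuration exercised above reduces to a short RPC sequence. A minimal sketch using scripts/rpc.py against the running nvmf_tgt (rpc_cmd in the log is the test suite's wrapper around the same calls; the malloc geometry, NQN, serial and the 10.0.0.2:4420 listener are the values logged above):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB backing bdev, 512 B blocks
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the data and discovery listeners up, the disconnect test then deliberately disrupts the 10.0.0.2:4420 connection, which is consistent with the connect()/qpair error storm above ending in the controller reset.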
00:37:39.940 Starting thread on core 1 00:37:39.940 Starting thread on core 2 00:37:39.940 Starting thread on core 3 00:37:39.940 Starting thread on core 0 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:39.940 00:37:39.940 real 0m11.435s 00:37:39.940 user 0m34.769s 00:37:39.940 sys 0m8.037s 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:39.940 ************************************ 00:37:39.940 END TEST nvmf_target_disconnect_tc2 00:37:39.940 ************************************ 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:39.940 rmmod nvme_tcp 00:37:39.940 rmmod nvme_fabrics 00:37:39.940 rmmod nvme_keyring 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 460103 ']' 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 460103 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 460103 ']' 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 460103 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 460103 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:37:39.940 13:49:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 460103' 00:37:39.941 killing process with pid 460103 00:37:39.941 13:49:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 460103 00:37:39.941 13:49:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 460103 00:37:41.314 13:49:15 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:41.314 13:49:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:41.314 13:49:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:41.314 13:49:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:41.314 13:49:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:41.314 13:49:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.314 13:49:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:41.314 13:49:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.216 13:49:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:43.216 00:37:43.216 real 0m17.208s 00:37:43.216 user 1m2.110s 00:37:43.216 sys 0m10.650s 00:37:43.216 13:49:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:43.216 13:49:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:43.216 ************************************ 00:37:43.216 END TEST nvmf_target_disconnect 00:37:43.216 ************************************ 00:37:43.216 13:49:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:43.216 13:49:17 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:37:43.216 13:49:17 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:43.216 13:49:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:43.216 13:49:17 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:37:43.216 00:37:43.216 real 28m57.247s 00:37:43.216 user 78m3.111s 00:37:43.216 sys 6m6.556s 00:37:43.216 13:49:17 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:43.216 13:49:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:43.216 ************************************ 00:37:43.216 END TEST nvmf_tcp 00:37:43.216 ************************************ 00:37:43.216 13:49:17 -- common/autotest_common.sh@1142 -- # return 0 00:37:43.216 13:49:17 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:37:43.216 13:49:17 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:43.216 13:49:17 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:43.216 13:49:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:43.216 13:49:17 -- common/autotest_common.sh@10 -- # set +x 00:37:43.216 ************************************ 00:37:43.216 START TEST spdkcli_nvmf_tcp 00:37:43.216 ************************************ 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:43.216 * Looking for test storage... 
00:37:43.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.216 13:49:17 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=461429 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 461429 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 461429 ']' 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:43.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:43.217 13:49:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:43.217 [2024-07-13 13:49:17.911945] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:43.217 [2024-07-13 13:49:17.912095] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461429 ] 00:37:43.474 EAL: No free 2048 kB hugepages reported on node 1 00:37:43.474 [2024-07-13 13:49:18.037901] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:43.731 [2024-07-13 13:49:18.293510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.731 [2024-07-13 13:49:18.293516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:44.294 13:49:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:44.294 13:49:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:37:44.294 13:49:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:44.294 13:49:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:44.294 13:49:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:44.294 13:49:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:44.294 13:49:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:44.294 13:49:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:44.294 13:49:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:44.294 13:49:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:44.294 13:49:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:44.294 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:44.294 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:44.294 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:44.294 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:44.294 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:44.294 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:44.295 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:44.295 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:44.295 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:44.295 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:44.295 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:44.295 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:44.295 ' 00:37:47.572 [2024-07-13 13:49:21.590116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:48.137 [2024-07-13 13:49:22.823674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:50.664 [2024-07-13 13:49:25.111134] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:52.565 [2024-07-13 13:49:27.077687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:53.940 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:53.940 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:53.940 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:53.940 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:53.940 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:53.940 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:53.940 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:53.940 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:53.940 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:53.940 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:53.940 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:53.940 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:53.940 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:54.199 13:49:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:54.199 13:49:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:54.199 13:49:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:54.199 13:49:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:54.199 13:49:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:54.199 13:49:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:54.199 13:49:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:54.199 13:49:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:54.457 13:49:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:54.457 13:49:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:54.457 13:49:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:54.457 13:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:54.457 13:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:54.716 13:49:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:54.716 13:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:54.716 13:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:54.716 13:49:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:54.716 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:54.716 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:54.716 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:54.716 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:54.716 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:54.716 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:54.716 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:54.716 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:54.716 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:54.716 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:54.716 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:54.716 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:54.716 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:54.716 ' 00:38:01.283 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:01.283 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:01.283 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:01.283 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:01.283 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:01.283 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:01.283 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:01.283 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:01.283 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:01.283 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:01.283 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:38:01.283 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:01.283 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:01.283 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 461429 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 461429 ']' 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 461429 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 461429 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 461429' 00:38:01.283 killing process with pid 461429 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 461429 00:38:01.283 13:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 461429 00:38:01.541 13:49:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:01.541 13:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:01.541 13:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 461429 ']' 00:38:01.541 13:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 461429 00:38:01.541 13:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 461429 ']' 00:38:01.541 13:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 461429 00:38:01.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (461429) - No such process 00:38:01.541 13:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 461429 is not found' 00:38:01.541 Process with pid 461429 is not found 00:38:01.541 13:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:01.541 13:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:01.541 13:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:01.541 00:38:01.541 real 0m18.464s 00:38:01.541 user 0m38.144s 00:38:01.541 sys 0m0.992s 00:38:01.541 13:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:01.541 13:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:01.541 ************************************ 00:38:01.541 END TEST spdkcli_nvmf_tcp 00:38:01.541 ************************************ 00:38:01.541 13:49:36 -- common/autotest_common.sh@1142 -- # return 0 00:38:01.541 13:49:36 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:01.541 13:49:36 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:01.541 13:49:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:01.541 13:49:36 -- common/autotest_common.sh@10 -- # set +x 00:38:01.541 ************************************ 00:38:01.541 START TEST nvmf_identify_passthru 00:38:01.541 ************************************ 00:38:01.541 13:49:36 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:01.798 * Looking for test storage... 00:38:01.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:01.798 13:49:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:01.798 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:01.798 13:49:36 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:01.798 13:49:36 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:01.798 13:49:36 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:01.798 13:49:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.799 13:49:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.799 13:49:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.799 13:49:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:01.799 13:49:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:01.799 13:49:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:01.799 13:49:36 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:01.799 13:49:36 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:01.799 13:49:36 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:01.799 13:49:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.799 13:49:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.799 13:49:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.799 13:49:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:01.799 13:49:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.799 13:49:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.799 13:49:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:01.799 13:49:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:01.799 13:49:36 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:38:01.799 13:49:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:03.697 13:49:38 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:03.697 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:03.697 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:03.697 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:03.698 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:03.698 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
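The nvmf_tcp_init steps that follow wire the two detected e810 ports (cvl_0_0 / cvl_0_1) into a test topology that keeps traffic on the physical link: the target-side port is moved into its own network namespace and given the target address, while the initiator keeps the other port. In outline (interface names and 10.0.0.x addresses are the ones logged below):

    ip netns add cvl_0_0_ns_spdk                                          # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                    # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

All of these commands appear in the nvmf_tcp_init trace below; the namespace is what forces target and initiator traffic over the NICs instead of the local stack.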
00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:03.698 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:03.956 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:03.956 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:03.956 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:03.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:03.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:38:03.956 00:38:03.956 --- 10.0.0.2 ping statistics --- 00:38:03.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:03.956 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:38:03.956 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:03.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:03.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:38:03.956 00:38:03.956 --- 10.0.0.1 ping statistics --- 00:38:03.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:03.956 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:38:03.956 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:03.956 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:38:03.956 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:03.956 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:03.956 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:03.956 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:03.956 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:03.956 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:03.956 13:49:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:03.956 13:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:03.956 13:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:38:03.956 13:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:38:03.956 13:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:38:03.956 13:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:38:03.956 13:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:38:03.956 13:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:03.956 13:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:03.956 EAL: No free 2048 kB hugepages reported on node 1 00:38:09.220 
13:49:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:38:09.221 13:49:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:38:09.221 13:49:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:09.221 13:49:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:09.221 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.441 13:49:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:38:13.441 13:49:47 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:13.441 13:49:47 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:13.441 13:49:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:13.441 13:49:47 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:13.441 13:49:47 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:13.441 13:49:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:13.441 13:49:47 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=466253 00:38:13.441 13:49:47 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:13.441 13:49:47 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:13.441 13:49:47 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 466253 00:38:13.441 13:49:47 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 466253 ']' 00:38:13.441 13:49:47 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.441 13:49:47 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:13.441 13:49:47 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:13.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:13.441 13:49:47 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:13.441 13:49:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:13.441 [2024-07-13 13:49:47.420134] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:38:13.441 [2024-07-13 13:49:47.420296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:13.441 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.441 [2024-07-13 13:49:47.556507] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:13.441 [2024-07-13 13:49:47.810928] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:13.441 [2024-07-13 13:49:47.810997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:13.441 [2024-07-13 13:49:47.811025] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:13.441 [2024-07-13 13:49:47.811045] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:13.441 [2024-07-13 13:49:47.811066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:13.441 [2024-07-13 13:49:47.811204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.441 [2024-07-13 13:49:47.811275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:13.441 [2024-07-13 13:49:47.811368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.441 [2024-07-13 13:49:47.811378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:13.699 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:13.699 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:38:13.699 13:49:48 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:13.699 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:13.699 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:13.699 INFO: Log level set to 20 00:38:13.699 INFO: Requests: 00:38:13.699 { 00:38:13.699 "jsonrpc": "2.0", 00:38:13.699 "method": "nvmf_set_config", 00:38:13.699 "id": 1, 00:38:13.699 "params": { 00:38:13.699 "admin_cmd_passthru": { 00:38:13.699 "identify_ctrlr": true 00:38:13.699 } 00:38:13.699 } 00:38:13.699 } 00:38:13.699 00:38:13.699 INFO: response: 00:38:13.699 { 00:38:13.699 "jsonrpc": "2.0", 00:38:13.699 "id": 1, 00:38:13.699 "result": true 00:38:13.699 } 00:38:13.699 00:38:13.699 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:13.699 13:49:48 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:13.699 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:13.699 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:13.699 INFO: Setting log level to 20 00:38:13.699 INFO: Setting log level to 20 00:38:13.699 INFO: Log level set to 20 00:38:13.699 INFO: Log level set to 20 00:38:13.699 INFO: Requests: 00:38:13.699 { 00:38:13.699 "jsonrpc": "2.0", 00:38:13.699 "method": "framework_start_init", 00:38:13.699 "id": 1 00:38:13.699 } 00:38:13.699 00:38:13.699 INFO: Requests: 00:38:13.699 { 00:38:13.699 "jsonrpc": "2.0", 00:38:13.699 "method": "framework_start_init", 00:38:13.699 "id": 1 00:38:13.699 } 00:38:13.699 00:38:13.957 [2024-07-13 13:49:48.667507] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:13.957 INFO: response: 00:38:13.957 { 00:38:13.957 "jsonrpc": "2.0", 00:38:13.957 "id": 1, 00:38:13.957 "result": true 00:38:13.957 } 00:38:13.957 00:38:13.957 INFO: response: 00:38:13.957 { 00:38:13.957 "jsonrpc": "2.0", 00:38:13.957 "id": 1, 00:38:13.957 "result": true 00:38:13.957 } 00:38:13.957 00:38:13.957 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:13.957 13:49:48 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:13.957 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:13.957 13:49:48 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:38:13.957 INFO: Setting log level to 40 00:38:13.957 INFO: Setting log level to 40 00:38:13.957 INFO: Setting log level to 40 00:38:13.957 [2024-07-13 13:49:48.680381] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:13.957 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:13.957 13:49:48 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:13.957 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:13.957 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.215 13:49:48 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:38:14.215 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:14.215 13:49:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.497 Nvme0n1 00:38:17.497 13:49:51 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:17.497 13:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:17.497 13:49:51 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:17.497 13:49:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.497 13:49:51 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:17.497 13:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:17.497 13:49:51 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:17.497 13:49:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.497 13:49:51 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:17.497 13:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:17.497 13:49:51 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:17.497 13:49:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.497 [2024-07-13 13:49:51.624310] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:17.497 13:49:51 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:17.497 13:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:17.497 13:49:51 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:17.497 13:49:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.497 [ 00:38:17.497 { 00:38:17.497 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:17.497 "subtype": "Discovery", 00:38:17.497 "listen_addresses": [], 00:38:17.497 "allow_any_host": true, 00:38:17.497 "hosts": [] 00:38:17.497 }, 00:38:17.497 { 00:38:17.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:17.497 "subtype": "NVMe", 00:38:17.497 "listen_addresses": [ 00:38:17.497 { 00:38:17.497 "trtype": "TCP", 00:38:17.497 "adrfam": "IPv4", 00:38:17.497 "traddr": "10.0.0.2", 00:38:17.497 "trsvcid": "4420" 00:38:17.497 } 00:38:17.497 ], 00:38:17.497 "allow_any_host": true, 00:38:17.497 "hosts": [], 00:38:17.497 "serial_number": 
"SPDK00000000000001", 00:38:17.497 "model_number": "SPDK bdev Controller", 00:38:17.497 "max_namespaces": 1, 00:38:17.497 "min_cntlid": 1, 00:38:17.497 "max_cntlid": 65519, 00:38:17.497 "namespaces": [ 00:38:17.497 { 00:38:17.497 "nsid": 1, 00:38:17.497 "bdev_name": "Nvme0n1", 00:38:17.497 "name": "Nvme0n1", 00:38:17.497 "nguid": "639E2DF769F44C239167CD2F4ECA6D01", 00:38:17.497 "uuid": "639e2df7-69f4-4c23-9167-cd2f4eca6d01" 00:38:17.497 } 00:38:17.497 ] 00:38:17.497 } 00:38:17.497 ] 00:38:17.497 13:49:51 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:17.497 13:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:17.497 13:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:17.497 13:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:17.497 EAL: No free 2048 kB hugepages reported on node 1 00:38:17.497 13:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:38:17.497 13:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:17.497 13:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:17.497 13:49:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:17.497 EAL: No free 2048 kB hugepages reported on node 1 00:38:17.497 13:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:38:17.497 13:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:38:17.497 13:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:38:17.497 13:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:17.497 13:49:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:17.497 13:49:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.756 13:49:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:17.756 13:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:17.756 13:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:17.756 13:49:52 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:17.756 13:49:52 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:38:17.756 13:49:52 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:17.756 13:49:52 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:38:17.756 13:49:52 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:17.756 13:49:52 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:17.756 rmmod nvme_tcp 00:38:17.756 rmmod nvme_fabrics 00:38:17.756 rmmod nvme_keyring 00:38:17.756 13:49:52 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:17.756 13:49:52 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:38:17.756 13:49:52 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:38:17.756 13:49:52 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 466253 ']' 00:38:17.756 13:49:52 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 466253 00:38:17.756 13:49:52 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 466253 ']' 00:38:17.756 13:49:52 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 466253 00:38:17.756 13:49:52 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:38:17.756 13:49:52 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:17.756 13:49:52 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 466253 00:38:17.756 13:49:52 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:17.756 13:49:52 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:17.756 13:49:52 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 466253' 00:38:17.756 killing process with pid 466253 00:38:17.756 13:49:52 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 466253 00:38:17.756 13:49:52 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 466253 00:38:20.282 13:49:54 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:20.282 13:49:54 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:20.283 13:49:54 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:20.283 13:49:54 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:20.283 13:49:54 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:20.283 13:49:54 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:20.283 13:49:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:20.283 13:49:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.815 13:49:56 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:22.815 00:38:22.815 real 0m20.688s 00:38:22.815 user 0m33.750s 00:38:22.815 sys 0m2.755s 00:38:22.815 13:49:56 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:22.815 13:49:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:22.815 ************************************ 00:38:22.815 END TEST nvmf_identify_passthru 00:38:22.815 ************************************ 00:38:22.815 13:49:56 -- common/autotest_common.sh@1142 -- # return 0 00:38:22.815 13:49:56 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:22.815 13:49:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:22.815 13:49:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:22.815 13:49:56 -- common/autotest_common.sh@10 -- # set +x 00:38:22.815 ************************************ 00:38:22.815 START TEST nvmf_dif 00:38:22.815 ************************************ 00:38:22.815 13:49:57 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:22.815 * Looking for test storage... 
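The identify_passthru case that just finished passes only if the controller data read locally over PCIe matches what the initiator sees through the TCP subsystem with admin-command passthru enabled. A minimal sketch of that comparison, using the BDF, address, and NQN from this run (spdk_nvme_identify stands for the build/bin binary invoked above; the test repeats the same check for 'Model Number:'):

  bdf=0000:88:00.0
  pcie_sn=$(spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" | awk '/Serial Number:/ {print $3}')
  tcp_sn=$(spdk_nvme_identify -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1" | awk '/Serial Number:/ {print $3}')
  [ "$pcie_sn" = "$tcp_sn" ] || { echo "passthru identify mismatch"; exit 1; }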
00:38:22.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:22.815 13:49:57 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:22.815 13:49:57 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:22.815 13:49:57 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:22.815 13:49:57 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:22.815 13:49:57 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.815 13:49:57 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.815 13:49:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.815 13:49:57 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:38:22.815 13:49:57 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:22.815 13:49:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:22.815 13:49:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:22.815 13:49:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:22.815 13:49:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:22.815 13:49:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.815 13:49:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:22.815 13:49:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:22.815 13:49:57 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:38:22.815 13:49:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:24.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:24.716 13:49:58 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:24.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
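The device discovery repeated here for the dif test keys everything off sysfs: each supported PCI function is mapped to its kernel netdev by globbing /sys/bus/pci/devices/<bdf>/net. A small sketch of that lookup for the two E810 ports in this system:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          echo "Found net devices under $pci: ${dev##*/}"             # prints cvl_0_0 and cvl_0_1 here
      done
  done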
00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:24.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:24.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:24.717 13:49:58 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:24.717 13:49:59 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:24.717 13:49:59 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:24.717 13:49:59 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:24.717 13:49:59 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:24.717 13:49:59 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:24.717 13:49:59 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:24.717 13:49:59 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:24.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:24.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:38:24.717 00:38:24.717 --- 10.0.0.2 ping statistics --- 00:38:24.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:24.717 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:38:24.717 13:49:59 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:24.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:24.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:38:24.717 00:38:24.717 --- 10.0.0.1 ping statistics --- 00:38:24.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:24.717 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:38:24.717 13:49:59 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:24.717 13:49:59 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:38:24.717 13:49:59 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:24.717 13:49:59 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:25.651 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:38:25.651 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:38:25.651 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:38:25.651 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:38:25.652 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:38:25.652 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:38:25.652 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:38:25.652 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:38:25.652 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:38:25.652 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:38:25.652 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:38:25.652 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:38:25.652 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:38:25.652 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:38:25.652 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:38:25.652 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:38:25.652 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:38:25.652 13:50:00 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:25.652 13:50:00 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:25.652 13:50:00 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:25.652 13:50:00 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:25.652 13:50:00 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:25.652 13:50:00 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:25.652 13:50:00 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:25.652 13:50:00 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:25.652 13:50:00 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:25.652 13:50:00 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:25.652 13:50:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:25.652 13:50:00 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=469710 00:38:25.652 13:50:00 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:25.652 13:50:00 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 469710 00:38:25.652 13:50:00 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 469710 ']' 00:38:25.652 13:50:00 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:25.652 13:50:00 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:25.652 13:50:00 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:25.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:25.652 13:50:00 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:25.652 13:50:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:25.909 [2024-07-13 13:50:00.480800] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:38:25.909 [2024-07-13 13:50:00.480964] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:25.909 EAL: No free 2048 kB hugepages reported on node 1 00:38:25.909 [2024-07-13 13:50:00.621078] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.167 [2024-07-13 13:50:00.879245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:26.167 [2024-07-13 13:50:00.879324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:26.167 [2024-07-13 13:50:00.879353] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:26.167 [2024-07-13 13:50:00.879378] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:26.167 [2024-07-13 13:50:00.879399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:26.167 [2024-07-13 13:50:00.879453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.733 13:50:01 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:26.733 13:50:01 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:38:26.733 13:50:01 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:26.733 13:50:01 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:26.733 13:50:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:26.733 13:50:01 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:26.733 13:50:01 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:26.733 13:50:01 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:26.733 13:50:01 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.733 13:50:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:26.733 [2024-07-13 13:50:01.408989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:26.733 13:50:01 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.733 13:50:01 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:26.733 13:50:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:26.733 13:50:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:26.733 13:50:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:26.733 ************************************ 00:38:26.733 START TEST fio_dif_1_default 00:38:26.733 ************************************ 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:26.733 bdev_null0 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:26.733 [2024-07-13 13:50:01.469365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:26.733 { 00:38:26.733 "params": { 00:38:26.733 "name": "Nvme$subsystem", 00:38:26.733 "trtype": "$TEST_TRANSPORT", 00:38:26.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:26.733 "adrfam": "ipv4", 00:38:26.733 "trsvcid": "$NVMF_PORT", 00:38:26.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:26.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:26.733 "hdgst": ${hdgst:-false}, 00:38:26.733 "ddgst": ${ddgst:-false} 00:38:26.733 }, 00:38:26.733 "method": "bdev_nvme_attach_controller" 00:38:26.733 } 00:38:26.733 EOF 00:38:26.733 )") 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:26.733 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:38:26.996 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:26.996 13:50:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:38:26.996 13:50:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:38:26.996 13:50:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:26.996 "params": { 00:38:26.996 "name": "Nvme0", 00:38:26.996 "trtype": "tcp", 00:38:26.996 "traddr": "10.0.0.2", 00:38:26.996 "adrfam": "ipv4", 00:38:26.996 "trsvcid": "4420", 00:38:26.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:26.996 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:26.996 "hdgst": false, 00:38:26.996 "ddgst": false 00:38:26.996 }, 00:38:26.996 "method": "bdev_nvme_attach_controller" 00:38:26.996 }' 00:38:26.996 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:26.996 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:26.996 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:38:26.996 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:26.997 13:50:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:27.274 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:27.274 fio-3.35 00:38:27.274 Starting 1 thread 00:38:27.274 EAL: No free 2048 kB hugepages reported on node 1 00:38:39.482 00:38:39.482 filename0: (groupid=0, jobs=1): err= 0: pid=470062: Sat Jul 13 13:50:12 2024 00:38:39.482 read: IOPS=184, BW=738KiB/s (756kB/s)(7392KiB/10017msec) 00:38:39.482 slat (nsec): min=7595, max=74443, avg=14416.13, stdev=7233.36 00:38:39.482 clat (usec): min=909, max=43510, avg=21635.81, stdev=20459.61 00:38:39.482 lat (usec): min=930, max=43552, avg=21650.22, stdev=20458.30 00:38:39.482 clat percentiles (usec): 00:38:39.482 | 1.00th=[ 947], 5.00th=[ 963], 10.00th=[ 979], 20.00th=[ 996], 00:38:39.482 | 30.00th=[ 1012], 40.00th=[ 1029], 50.00th=[41681], 60.00th=[41681], 00:38:39.482 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:38:39.482 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:38:39.482 | 99.99th=[43254] 00:38:39.482 bw ( KiB/s): min= 672, max= 768, per=99.87%, avg=737.60, stdev=33.60, samples=20 00:38:39.482 iops : min= 168, max= 192, avg=184.40, stdev= 8.40, samples=20 00:38:39.482 lat (usec) : 1000=24.51% 00:38:39.482 lat (msec) : 2=25.05%, 50=50.43% 00:38:39.482 cpu : usr=91.14%, sys=8.38%, ctx=18, majf=0, minf=1636 00:38:39.482 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:39.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:39.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:39.482 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:39.482 latency : target=0, window=0, percentile=100.00%, 
depth=4 00:38:39.482 00:38:39.482 Run status group 0 (all jobs): 00:38:39.482 READ: bw=738KiB/s (756kB/s), 738KiB/s-738KiB/s (756kB/s-756kB/s), io=7392KiB (7569kB), run=10017-10017msec 00:38:39.482 ----------------------------------------------------- 00:38:39.482 Suppressions used: 00:38:39.482 count bytes template 00:38:39.482 1 8 /usr/src/fio/parse.c 00:38:39.482 1 8 libtcmalloc_minimal.so 00:38:39.482 1 904 libcrypto.so 00:38:39.482 ----------------------------------------------------- 00:38:39.482 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.482 00:38:39.482 real 0m12.203s 00:38:39.482 user 0m11.194s 00:38:39.482 sys 0m1.260s 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:39.482 ************************************ 00:38:39.482 END TEST fio_dif_1_default 00:38:39.482 ************************************ 00:38:39.482 13:50:13 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:39.482 13:50:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:39.482 13:50:13 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:39.482 13:50:13 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:39.482 13:50:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:39.482 ************************************ 00:38:39.482 START TEST fio_dif_1_multi_subsystems 00:38:39.482 ************************************ 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:39.482 
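Both fio_dif cases drive the target the same way: a JSON config attaches the exported subsystem as a local SPDK bdev, and fio runs against that bdev through the spdk_bdev ioengine, with no kernel block device involved. A trimmed sketch, with the attach-controller entry taken from the log above and the fio options reconstructed from the job summary line (the filename is assumed to be the attached bdev Nvme0n1; the surrounding bdev-subsystem JSON wrapper is omitted):

  # controller entry inside the file passed to --spdk_json_conf
  { "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } }

  # invocation: the fio plugin is LD_PRELOADed and selected by name, as in the log
  LD_PRELOAD="$SPDK/build/fio/spdk_bdev" fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
      --name=filename0 --filename=Nvme0n1 --rw=randread --bs=4096 --iodepth=4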
13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.482 bdev_null0 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.482 [2024-07-13 13:50:13.715684] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.482 bdev_null1 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 
00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:39.482 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:39.482 { 00:38:39.482 "params": { 00:38:39.482 "name": "Nvme$subsystem", 00:38:39.482 "trtype": "$TEST_TRANSPORT", 00:38:39.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:39.482 "adrfam": "ipv4", 00:38:39.482 "trsvcid": "$NVMF_PORT", 00:38:39.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:39.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:39.483 "hdgst": ${hdgst:-false}, 00:38:39.483 "ddgst": ${ddgst:-false} 00:38:39.483 }, 00:38:39.483 "method": "bdev_nvme_attach_controller" 00:38:39.483 } 00:38:39.483 EOF 00:38:39.483 )") 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:38:39.483 13:50:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:39.483 { 00:38:39.483 "params": { 00:38:39.483 "name": "Nvme$subsystem", 00:38:39.483 "trtype": "$TEST_TRANSPORT", 00:38:39.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:39.483 "adrfam": "ipv4", 00:38:39.483 "trsvcid": "$NVMF_PORT", 00:38:39.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:39.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:39.483 "hdgst": ${hdgst:-false}, 00:38:39.483 "ddgst": ${ddgst:-false} 00:38:39.483 }, 00:38:39.483 "method": "bdev_nvme_attach_controller" 00:38:39.483 } 00:38:39.483 EOF 00:38:39.483 )") 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:39.483 "params": { 00:38:39.483 "name": "Nvme0", 00:38:39.483 "trtype": "tcp", 00:38:39.483 "traddr": "10.0.0.2", 00:38:39.483 "adrfam": "ipv4", 00:38:39.483 "trsvcid": "4420", 00:38:39.483 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:39.483 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:39.483 "hdgst": false, 00:38:39.483 "ddgst": false 00:38:39.483 }, 00:38:39.483 "method": "bdev_nvme_attach_controller" 00:38:39.483 },{ 00:38:39.483 "params": { 00:38:39.483 "name": "Nvme1", 00:38:39.483 "trtype": "tcp", 00:38:39.483 "traddr": "10.0.0.2", 00:38:39.483 "adrfam": "ipv4", 00:38:39.483 "trsvcid": "4420", 00:38:39.483 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:39.483 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:39.483 "hdgst": false, 00:38:39.483 "ddgst": false 00:38:39.483 }, 00:38:39.483 "method": "bdev_nvme_attach_controller" 00:38:39.483 }' 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:39.483 13:50:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:39.483 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:39.483 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:39.483 fio-3.35 00:38:39.483 Starting 2 threads 00:38:39.483 EAL: No free 2048 kB hugepages reported on node 1 00:38:51.676 00:38:51.676 filename0: (groupid=0, jobs=1): err= 0: pid=471588: Sat Jul 13 13:50:25 2024 00:38:51.676 read: IOPS=185, BW=740KiB/s (758kB/s)(7424KiB/10032msec) 00:38:51.676 slat (nsec): min=5803, max=88577, avg=14586.55, stdev=6681.93 00:38:51.676 clat (usec): min=909, max=42990, avg=21575.24, stdev=20495.93 00:38:51.676 lat (usec): min=920, max=43012, avg=21589.82, stdev=20494.62 00:38:51.676 clat percentiles (usec): 00:38:51.676 | 1.00th=[ 930], 5.00th=[ 938], 10.00th=[ 955], 20.00th=[ 971], 00:38:51.676 | 30.00th=[ 1004], 40.00th=[ 1037], 50.00th=[41157], 60.00th=[41681], 00:38:51.676 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:51.676 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:38:51.676 | 99.99th=[42730] 00:38:51.676 bw ( KiB/s): min= 672, max= 768, per=50.00%, avg=740.80, stdev=33.28, samples=20 00:38:51.676 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:38:51.676 lat (usec) : 1000=29.09% 00:38:51.676 lat (msec) : 2=20.69%, 50=50.22% 00:38:51.676 cpu : usr=96.04%, sys=3.27%, ctx=26, majf=0, minf=1637 00:38:51.676 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:51.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:51.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:38:51.676 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:51.676 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:51.676 filename1: (groupid=0, jobs=1): err= 0: pid=471589: Sat Jul 13 13:50:25 2024 00:38:51.676 read: IOPS=185, BW=741KiB/s (758kB/s)(7424KiB/10024msec) 00:38:51.676 slat (nsec): min=5777, max=68444, avg=15930.45, stdev=7904.08 00:38:51.676 clat (usec): min=878, max=48794, avg=21553.03, stdev=20478.54 00:38:51.676 lat (usec): min=890, max=48831, avg=21568.96, stdev=20476.71 00:38:51.676 clat percentiles (usec): 00:38:51.676 | 1.00th=[ 906], 5.00th=[ 922], 10.00th=[ 938], 20.00th=[ 963], 00:38:51.676 | 30.00th=[ 1004], 40.00th=[ 1045], 50.00th=[41157], 60.00th=[41681], 00:38:51.676 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:38:51.676 | 99.00th=[42206], 99.50th=[42206], 99.90th=[49021], 99.95th=[49021], 00:38:51.676 | 99.99th=[49021] 00:38:51.676 bw ( KiB/s): min= 672, max= 768, per=50.00%, avg=740.80, stdev=34.18, samples=20 00:38:51.676 iops : min= 168, max= 192, avg=185.20, stdev= 8.42, samples=20 00:38:51.676 lat (usec) : 1000=29.58% 00:38:51.676 lat (msec) : 2=20.20%, 50=50.22% 00:38:51.676 cpu : usr=96.18%, sys=3.34%, ctx=24, majf=0, minf=1636 00:38:51.676 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:51.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:51.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:51.676 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:51.676 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:51.676 00:38:51.676 Run status group 0 (all jobs): 00:38:51.676 READ: bw=1480KiB/s (1516kB/s), 740KiB/s-741KiB/s (758kB/s-758kB/s), io=14.5MiB (15.2MB), run=10024-10032msec 00:38:51.676 ----------------------------------------------------- 00:38:51.676 Suppressions used: 00:38:51.676 count bytes template 00:38:51.676 2 16 /usr/src/fio/parse.c 00:38:51.676 1 8 libtcmalloc_minimal.so 00:38:51.676 1 904 libcrypto.so 00:38:51.676 ----------------------------------------------------- 00:38:51.676 00:38:51.676 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:51.676 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:51.676 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:51.676 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:51.676 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:51.676 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:51.676 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:51.676 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:51.676 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:51.677 00:38:51.677 real 0m12.454s 00:38:51.677 user 0m21.559s 00:38:51.677 sys 0m1.127s 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:51.677 13:50:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:51.677 ************************************ 00:38:51.677 END TEST fio_dif_1_multi_subsystems 00:38:51.677 ************************************ 00:38:51.677 13:50:26 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:51.677 13:50:26 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:51.677 13:50:26 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:51.677 13:50:26 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:51.677 13:50:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:51.677 ************************************ 00:38:51.677 START TEST fio_dif_rand_params 00:38:51.677 ************************************ 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:51.677 bdev_null0 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:51.677 [2024-07-13 13:50:26.225648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:51.677 { 00:38:51.677 "params": { 00:38:51.677 "name": "Nvme$subsystem", 00:38:51.677 "trtype": "$TEST_TRANSPORT", 00:38:51.677 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:38:51.677 "adrfam": "ipv4", 00:38:51.677 "trsvcid": "$NVMF_PORT", 00:38:51.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:51.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:51.677 "hdgst": ${hdgst:-false}, 00:38:51.677 "ddgst": ${ddgst:-false} 00:38:51.677 }, 00:38:51.677 "method": "bdev_nvme_attach_controller" 00:38:51.677 } 00:38:51.677 EOF 00:38:51.677 )") 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:51.677 "params": { 00:38:51.677 "name": "Nvme0", 00:38:51.677 "trtype": "tcp", 00:38:51.677 "traddr": "10.0.0.2", 00:38:51.677 "adrfam": "ipv4", 00:38:51.677 "trsvcid": "4420", 00:38:51.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:51.677 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:51.677 "hdgst": false, 00:38:51.677 "ddgst": false 00:38:51.677 }, 00:38:51.677 "method": "bdev_nvme_attach_controller" 00:38:51.677 }' 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:51.677 13:50:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:51.935 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:51.935 ... 
00:38:51.935 fio-3.35 00:38:51.935 Starting 3 threads 00:38:51.935 EAL: No free 2048 kB hugepages reported on node 1 00:38:58.490 00:38:58.490 filename0: (groupid=0, jobs=1): err= 0: pid=473102: Sat Jul 13 13:50:32 2024 00:38:58.490 read: IOPS=186, BW=23.3MiB/s (24.4MB/s)(117MiB/5006msec) 00:38:58.490 slat (nsec): min=5721, max=35421, avg=19619.35, stdev=3097.53 00:38:58.490 clat (usec): min=6096, max=90417, avg=16084.97, stdev=12933.80 00:38:58.490 lat (usec): min=6115, max=90437, avg=16104.59, stdev=12933.86 00:38:58.490 clat percentiles (usec): 00:38:58.490 | 1.00th=[ 6325], 5.00th=[ 6652], 10.00th=[ 7242], 20.00th=[ 9634], 00:38:58.490 | 30.00th=[10290], 40.00th=[11207], 50.00th=[12518], 60.00th=[13566], 00:38:58.490 | 70.00th=[14484], 80.00th=[15664], 90.00th=[49546], 95.00th=[52691], 00:38:58.490 | 99.00th=[55837], 99.50th=[57934], 99.90th=[90702], 99.95th=[90702], 00:38:58.490 | 99.99th=[90702] 00:38:58.490 bw ( KiB/s): min=18432, max=26368, per=33.94%, avg=23782.40, stdev=2465.68, samples=10 00:38:58.490 iops : min= 144, max= 206, avg=185.80, stdev=19.26, samples=10 00:38:58.490 lat (msec) : 10=26.39%, 20=63.09%, 50=1.39%, 100=9.12% 00:38:58.490 cpu : usr=92.89%, sys=6.51%, ctx=23, majf=0, minf=1637 00:38:58.490 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:58.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.490 issued rwts: total=932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:58.490 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:58.490 filename0: (groupid=0, jobs=1): err= 0: pid=473103: Sat Jul 13 13:50:32 2024 00:38:58.490 read: IOPS=180, BW=22.6MiB/s (23.7MB/s)(113MiB/5008msec) 00:38:58.490 slat (nsec): min=8052, max=41641, avg=21265.48, stdev=3661.62 00:38:58.490 clat (usec): min=5967, max=95589, avg=16552.93, stdev=13794.75 00:38:58.490 lat (usec): min=5988, max=95611, avg=16574.20, stdev=13794.80 00:38:58.490 clat percentiles (usec): 00:38:58.490 | 1.00th=[ 6718], 5.00th=[ 7504], 10.00th=[ 8586], 20.00th=[ 9634], 00:38:58.490 | 30.00th=[10290], 40.00th=[11338], 50.00th=[12649], 60.00th=[13435], 00:38:58.490 | 70.00th=[14222], 80.00th=[15008], 90.00th=[50070], 95.00th=[53216], 00:38:58.490 | 99.00th=[55837], 99.50th=[56361], 99.90th=[95945], 99.95th=[95945], 00:38:58.490 | 99.99th=[95945] 00:38:58.490 bw ( KiB/s): min=20736, max=27648, per=33.00%, avg=23121.20, stdev=2236.63, samples=10 00:38:58.490 iops : min= 162, max= 216, avg=180.60, stdev=17.49, samples=10 00:38:58.490 lat (msec) : 10=25.83%, 20=62.91%, 50=1.43%, 100=9.82% 00:38:58.490 cpu : usr=92.11%, sys=6.89%, ctx=179, majf=0, minf=1636 00:38:58.490 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:58.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.490 issued rwts: total=906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:58.490 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:58.490 filename0: (groupid=0, jobs=1): err= 0: pid=473104: Sat Jul 13 13:50:32 2024 00:38:58.490 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(116MiB/5049msec) 00:38:58.490 slat (nsec): min=6594, max=41320, avg=21444.21, stdev=3723.33 00:38:58.490 clat (usec): min=5664, max=61486, avg=16284.02, stdev=12539.44 00:38:58.490 lat (usec): min=5705, max=61509, avg=16305.46, stdev=12539.68 00:38:58.490 clat percentiles (usec): 
00:38:58.490 | 1.00th=[ 6390], 5.00th=[ 6718], 10.00th=[ 7373], 20.00th=[ 9896], 00:38:58.490 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12911], 60.00th=[14091], 00:38:58.490 | 70.00th=[15139], 80.00th=[16581], 90.00th=[20579], 95.00th=[53216], 00:38:58.490 | 99.00th=[56886], 99.50th=[57934], 99.90th=[61604], 99.95th=[61604], 00:38:58.490 | 99.99th=[61604] 00:38:58.490 bw ( KiB/s): min=19968, max=27648, per=33.71%, avg=23623.80, stdev=2506.37, samples=10 00:38:58.490 iops : min= 156, max= 216, avg=184.50, stdev=19.56, samples=10 00:38:58.490 lat (msec) : 10=21.81%, 20=67.71%, 50=2.16%, 100=8.32% 00:38:58.490 cpu : usr=93.44%, sys=5.98%, ctx=10, majf=0, minf=1637 00:38:58.490 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:58.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:58.490 issued rwts: total=926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:58.490 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:58.490 00:38:58.490 Run status group 0 (all jobs): 00:38:58.490 READ: bw=68.4MiB/s (71.8MB/s), 22.6MiB/s-23.3MiB/s (23.7MB/s-24.4MB/s), io=346MiB (362MB), run=5006-5049msec 00:38:59.057 ----------------------------------------------------- 00:38:59.057 Suppressions used: 00:38:59.057 count bytes template 00:38:59.057 5 44 /usr/src/fio/parse.c 00:38:59.057 1 8 libtcmalloc_minimal.so 00:38:59.057 1 904 libcrypto.so 00:38:59.057 ----------------------------------------------------- 00:38:59.057 00:38:59.057 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:59.057 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:59.057 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:59.057 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:59.057 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:59.057 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:59.058 13:50:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 bdev_null0 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 [2024-07-13 13:50:33.643874] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 bdev_null1 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 bdev_null2 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:59.058 { 00:38:59.058 "params": { 00:38:59.058 "name": "Nvme$subsystem", 00:38:59.058 "trtype": "$TEST_TRANSPORT", 00:38:59.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.058 "adrfam": "ipv4", 00:38:59.058 "trsvcid": "$NVMF_PORT", 00:38:59.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.058 "hdgst": ${hdgst:-false}, 00:38:59.058 "ddgst": ${ddgst:-false} 00:38:59.058 }, 00:38:59.058 "method": "bdev_nvme_attach_controller" 00:38:59.058 } 00:38:59.058 EOF 00:38:59.058 )") 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:59.058 { 00:38:59.058 "params": { 00:38:59.058 "name": "Nvme$subsystem", 00:38:59.058 "trtype": "$TEST_TRANSPORT", 00:38:59.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.058 "adrfam": "ipv4", 00:38:59.058 "trsvcid": "$NVMF_PORT", 00:38:59.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:38:59.058 "hdgst": ${hdgst:-false}, 00:38:59.058 "ddgst": ${ddgst:-false} 00:38:59.058 }, 00:38:59.058 "method": "bdev_nvme_attach_controller" 00:38:59.058 } 00:38:59.058 EOF 00:38:59.058 )") 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:59.058 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:59.059 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:59.059 { 00:38:59.059 "params": { 00:38:59.059 "name": "Nvme$subsystem", 00:38:59.059 "trtype": "$TEST_TRANSPORT", 00:38:59.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:59.059 "adrfam": "ipv4", 00:38:59.059 "trsvcid": "$NVMF_PORT", 00:38:59.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:59.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:59.059 "hdgst": ${hdgst:-false}, 00:38:59.059 "ddgst": ${ddgst:-false} 00:38:59.059 }, 00:38:59.059 "method": "bdev_nvme_attach_controller" 00:38:59.059 } 00:38:59.059 EOF 00:38:59.059 )") 00:38:59.059 13:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:59.059 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:59.059 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:59.059 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:59.059 13:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:59.059 "params": { 00:38:59.059 "name": "Nvme0", 00:38:59.059 "trtype": "tcp", 00:38:59.059 "traddr": "10.0.0.2", 00:38:59.059 "adrfam": "ipv4", 00:38:59.059 "trsvcid": "4420", 00:38:59.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:59.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:59.059 "hdgst": false, 00:38:59.059 "ddgst": false 00:38:59.059 }, 00:38:59.059 "method": "bdev_nvme_attach_controller" 00:38:59.059 },{ 00:38:59.059 "params": { 00:38:59.059 "name": "Nvme1", 00:38:59.059 "trtype": "tcp", 00:38:59.059 "traddr": "10.0.0.2", 00:38:59.059 "adrfam": "ipv4", 00:38:59.059 "trsvcid": "4420", 00:38:59.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:59.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:59.059 "hdgst": false, 00:38:59.059 "ddgst": false 00:38:59.059 }, 00:38:59.059 "method": "bdev_nvme_attach_controller" 00:38:59.059 },{ 00:38:59.059 "params": { 00:38:59.059 "name": "Nvme2", 00:38:59.059 "trtype": "tcp", 00:38:59.059 "traddr": "10.0.0.2", 00:38:59.059 "adrfam": "ipv4", 00:38:59.059 "trsvcid": "4420", 00:38:59.059 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:59.059 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:59.059 "hdgst": false, 00:38:59.059 "ddgst": false 00:38:59.059 }, 00:38:59.059 "method": "bdev_nvme_attach_controller" 00:38:59.059 }' 00:38:59.059 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:59.059 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:59.059 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:59.059 13:50:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:59.059 13:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:59.317 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:59.317 ... 00:38:59.317 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:59.317 ... 00:38:59.317 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:59.317 ... 00:38:59.317 fio-3.35 00:38:59.317 Starting 24 threads 00:38:59.575 EAL: No free 2048 kB hugepages reported on node 1 00:39:11.775 00:39:11.775 filename0: (groupid=0, jobs=1): err= 0: pid=474084: Sat Jul 13 13:50:45 2024 00:39:11.775 read: IOPS=96, BW=388KiB/s (397kB/s)(3904KiB/10069msec) 00:39:11.775 slat (usec): min=12, max=125, avg=24.34, stdev=13.42 00:39:11.775 clat (msec): min=130, max=341, avg=164.81, stdev=24.48 00:39:11.775 lat (msec): min=130, max=341, avg=164.84, stdev=24.48 00:39:11.775 clat percentiles (msec): 00:39:11.775 | 1.00th=[ 131], 5.00th=[ 146], 10.00th=[ 153], 20.00th=[ 155], 00:39:11.775 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 167], 00:39:11.775 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.775 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 342], 00:39:11.775 | 99.99th=[ 342] 00:39:11.775 bw ( KiB/s): min= 256, max= 512, per=4.05%, avg=384.00, stdev=41.53, samples=20 00:39:11.775 iops : min= 64, max= 128, avg=96.00, stdev=10.38, samples=20 00:39:11.775 lat (msec) : 250=98.36%, 500=1.64% 00:39:11.775 cpu : usr=95.99%, sys=2.59%, ctx=71, majf=0, minf=1635 00:39:11.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:11.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.775 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.775 filename0: (groupid=0, jobs=1): err= 0: pid=474085: Sat Jul 13 13:50:45 2024 00:39:11.775 read: IOPS=106, BW=425KiB/s (435kB/s)(4304KiB/10131msec) 00:39:11.775 slat (usec): min=6, max=262, avg=54.70, stdev=15.86 00:39:11.775 clat (msec): min=6, max=244, avg=150.04, stdev=38.44 00:39:11.775 lat (msec): min=6, max=244, avg=150.10, stdev=38.45 00:39:11.775 clat percentiles (msec): 00:39:11.775 | 1.00th=[ 13], 5.00th=[ 23], 10.00th=[ 131], 20.00th=[ 150], 00:39:11.775 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.775 | 70.00th=[ 167], 80.00th=[ 169], 90.00th=[ 169], 95.00th=[ 171], 00:39:11.775 | 99.00th=[ 171], 99.50th=[ 174], 99.90th=[ 245], 99.95th=[ 245], 00:39:11.775 | 99.99th=[ 245] 00:39:11.775 bw ( KiB/s): min= 368, max= 944, per=4.47%, avg=424.00, stdev=128.79, samples=20 00:39:11.775 iops : min= 92, max= 236, avg=106.00, stdev=32.20, samples=20 00:39:11.775 lat (msec) : 10=0.74%, 20=4.09%, 50=1.12%, 100=2.79%, 250=91.26% 00:39:11.775 cpu : usr=95.59%, sys=2.53%, ctx=115, majf=0, minf=1635 00:39:11.775 IO depths : 1=5.5%, 2=11.2%, 4=23.2%, 8=52.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:39:11.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.775 complete 
: 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.775 issued rwts: total=1076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.775 filename0: (groupid=0, jobs=1): err= 0: pid=474086: Sat Jul 13 13:50:45 2024 00:39:11.775 read: IOPS=99, BW=399KiB/s (408kB/s)(4032KiB/10112msec) 00:39:11.775 slat (nsec): min=6632, max=87571, avg=43450.99, stdev=10350.97 00:39:11.775 clat (msec): min=84, max=171, avg=160.11, stdev=13.85 00:39:11.775 lat (msec): min=85, max=171, avg=160.15, stdev=13.86 00:39:11.775 clat percentiles (msec): 00:39:11.775 | 1.00th=[ 86], 5.00th=[ 142], 10.00th=[ 150], 20.00th=[ 153], 00:39:11.775 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.775 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 169], 95.00th=[ 171], 00:39:11.775 | 99.00th=[ 171], 99.50th=[ 171], 99.90th=[ 171], 99.95th=[ 171], 00:39:11.775 | 99.99th=[ 171] 00:39:11.775 bw ( KiB/s): min= 384, max= 512, per=4.18%, avg=396.80, stdev=39.40, samples=20 00:39:11.775 iops : min= 96, max= 128, avg=99.20, stdev= 9.85, samples=20 00:39:11.775 lat (msec) : 100=1.59%, 250=98.41% 00:39:11.775 cpu : usr=97.82%, sys=1.61%, ctx=28, majf=0, minf=1635 00:39:11.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:11.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.775 issued rwts: total=1008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.775 filename0: (groupid=0, jobs=1): err= 0: pid=474087: Sat Jul 13 13:50:45 2024 00:39:11.775 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10018msec) 00:39:11.775 slat (nsec): min=17820, max=96518, avg=56233.05, stdev=10578.17 00:39:11.775 clat (msec): min=101, max=262, avg=163.73, stdev=19.99 00:39:11.775 lat (msec): min=101, max=262, avg=163.79, stdev=19.99 00:39:11.775 clat percentiles (msec): 00:39:11.775 | 1.00th=[ 108], 5.00th=[ 144], 10.00th=[ 146], 20.00th=[ 155], 00:39:11.775 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.775 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 194], 00:39:11.775 | 99.00th=[ 264], 99.50th=[ 264], 99.90th=[ 264], 99.95th=[ 264], 00:39:11.775 | 99.99th=[ 264] 00:39:11.775 bw ( KiB/s): min= 256, max= 496, per=4.04%, avg=384.00, stdev=55.91, samples=20 00:39:11.775 iops : min= 64, max= 124, avg=96.00, stdev=13.98, samples=20 00:39:11.775 lat (msec) : 250=98.36%, 500=1.64% 00:39:11.775 cpu : usr=97.77%, sys=1.66%, ctx=15, majf=0, minf=1635 00:39:11.775 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:39:11.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.775 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.775 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.775 filename0: (groupid=0, jobs=1): err= 0: pid=474088: Sat Jul 13 13:50:45 2024 00:39:11.775 read: IOPS=99, BW=399KiB/s (409kB/s)(4032KiB/10107msec) 00:39:11.775 slat (usec): min=4, max=133, avg=58.54, stdev=12.30 00:39:11.775 clat (msec): min=85, max=180, avg=159.91, stdev=14.17 00:39:11.775 lat (msec): min=85, max=180, avg=159.97, stdev=14.17 00:39:11.775 clat percentiles (msec): 00:39:11.775 | 1.00th=[ 86], 5.00th=[ 142], 10.00th=[ 150], 20.00th=[ 153], 
00:39:11.775 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.775 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 169], 95.00th=[ 171], 00:39:11.775 | 99.00th=[ 171], 99.50th=[ 171], 99.90th=[ 182], 99.95th=[ 182], 00:39:11.775 | 99.99th=[ 182] 00:39:11.775 bw ( KiB/s): min= 384, max= 512, per=4.18%, avg=396.80, stdev=39.40, samples=20 00:39:11.775 iops : min= 96, max= 128, avg=99.20, stdev= 9.85, samples=20 00:39:11.775 lat (msec) : 100=1.39%, 250=98.61% 00:39:11.775 cpu : usr=96.41%, sys=2.18%, ctx=118, majf=0, minf=1634 00:39:11.775 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:11.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.775 issued rwts: total=1008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.775 filename0: (groupid=0, jobs=1): err= 0: pid=474089: Sat Jul 13 13:50:45 2024 00:39:11.775 read: IOPS=96, BW=388KiB/s (397kB/s)(3904KiB/10068msec) 00:39:11.775 slat (usec): min=14, max=352, avg=39.20, stdev=17.29 00:39:11.775 clat (msec): min=125, max=300, avg=164.68, stdev=20.48 00:39:11.775 lat (msec): min=125, max=300, avg=164.72, stdev=20.48 00:39:11.775 clat percentiles (msec): 00:39:11.775 | 1.00th=[ 127], 5.00th=[ 146], 10.00th=[ 150], 20.00th=[ 155], 00:39:11.775 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 167], 00:39:11.775 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.775 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:39:11.775 | 99.99th=[ 300] 00:39:11.775 bw ( KiB/s): min= 256, max= 512, per=4.05%, avg=384.00, stdev=41.53, samples=20 00:39:11.775 iops : min= 64, max= 128, avg=96.00, stdev=10.38, samples=20 00:39:11.775 lat (msec) : 250=98.36%, 500=1.64% 00:39:11.775 cpu : usr=97.66%, sys=1.83%, ctx=37, majf=0, minf=1636 00:39:11.775 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:39:11.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.775 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.775 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.775 filename0: (groupid=0, jobs=1): err= 0: pid=474090: Sat Jul 13 13:50:45 2024 00:39:11.775 read: IOPS=96, BW=388KiB/s (397kB/s)(3904KiB/10072msec) 00:39:11.775 slat (nsec): min=14355, max=89785, avg=57694.28, stdev=9372.69 00:39:11.775 clat (msec): min=107, max=304, avg=163.86, stdev=16.46 00:39:11.775 lat (msec): min=107, max=304, avg=163.91, stdev=16.46 00:39:11.775 clat percentiles (msec): 00:39:11.775 | 1.00th=[ 142], 5.00th=[ 146], 10.00th=[ 150], 20.00th=[ 155], 00:39:11.775 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.775 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.775 | 99.00th=[ 266], 99.50th=[ 266], 99.90th=[ 305], 99.95th=[ 305], 00:39:11.775 | 99.99th=[ 305] 00:39:11.775 bw ( KiB/s): min= 256, max= 512, per=4.04%, avg=384.00, stdev=58.73, samples=20 00:39:11.775 iops : min= 64, max= 128, avg=96.00, stdev=14.68, samples=20 00:39:11.775 lat (msec) : 250=98.36%, 500=1.64% 00:39:11.775 cpu : usr=94.81%, sys=3.01%, ctx=295, majf=0, minf=1636 00:39:11.775 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:11.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.776 filename0: (groupid=0, jobs=1): err= 0: pid=474091: Sat Jul 13 13:50:45 2024 00:39:11.776 read: IOPS=102, BW=410KiB/s (420kB/s)(4152KiB/10127msec) 00:39:11.776 slat (nsec): min=4458, max=75129, avg=27123.33, stdev=9750.10 00:39:11.776 clat (msec): min=13, max=246, avg=155.76, stdev=34.20 00:39:11.776 lat (msec): min=13, max=246, avg=155.79, stdev=34.20 00:39:11.776 clat percentiles (msec): 00:39:11.776 | 1.00th=[ 14], 5.00th=[ 86], 10.00th=[ 113], 20.00th=[ 153], 00:39:11.776 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.776 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 174], 00:39:11.776 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 247], 99.95th=[ 247], 00:39:11.776 | 99.99th=[ 247] 00:39:11.776 bw ( KiB/s): min= 368, max= 640, per=4.31%, avg=408.80, stdev=65.96, samples=20 00:39:11.776 iops : min= 92, max= 160, avg=102.20, stdev=16.49, samples=20 00:39:11.776 lat (msec) : 20=1.54%, 50=1.54%, 100=5.97%, 250=90.94% 00:39:11.776 cpu : usr=97.63%, sys=1.96%, ctx=20, majf=0, minf=1637 00:39:11.776 IO depths : 1=3.9%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:39:11.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 issued rwts: total=1038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.776 filename1: (groupid=0, jobs=1): err= 0: pid=474092: Sat Jul 13 13:50:45 2024 00:39:11.776 read: IOPS=99, BW=399KiB/s (408kB/s)(4032KiB/10109msec) 00:39:11.776 slat (usec): min=13, max=107, avg=59.90, stdev=12.55 00:39:11.776 clat (msec): min=85, max=211, avg=159.89, stdev=18.02 00:39:11.776 lat (msec): min=85, max=211, avg=159.95, stdev=18.02 00:39:11.776 clat percentiles (msec): 00:39:11.776 | 1.00th=[ 86], 5.00th=[ 125], 10.00th=[ 146], 20.00th=[ 153], 00:39:11.776 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.776 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.776 | 99.00th=[ 209], 99.50th=[ 211], 99.90th=[ 211], 99.95th=[ 211], 00:39:11.776 | 99.99th=[ 211] 00:39:11.776 bw ( KiB/s): min= 384, max= 512, per=4.18%, avg=396.80, stdev=36.93, samples=20 00:39:11.776 iops : min= 96, max= 128, avg=99.20, stdev= 9.23, samples=20 00:39:11.776 lat (msec) : 100=1.59%, 250=98.41% 00:39:11.776 cpu : usr=95.24%, sys=2.66%, ctx=86, majf=0, minf=1637 00:39:11.776 IO depths : 1=4.3%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:39:11.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 issued rwts: total=1008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.776 filename1: (groupid=0, jobs=1): err= 0: pid=474093: Sat Jul 13 13:50:45 2024 00:39:11.776 read: IOPS=99, BW=399KiB/s (408kB/s)(4032KiB/10110msec) 00:39:11.776 slat (nsec): min=11313, max=94697, avg=52785.89, stdev=11682.97 00:39:11.776 clat (msec): min=85, max=213, avg=160.01, stdev=14.54 00:39:11.776 lat (msec): min=85, max=213, avg=160.06, stdev=14.54 00:39:11.776 clat percentiles 
(msec): 00:39:11.776 | 1.00th=[ 86], 5.00th=[ 142], 10.00th=[ 150], 20.00th=[ 153], 00:39:11.776 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.776 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.776 | 99.00th=[ 171], 99.50th=[ 188], 99.90th=[ 213], 99.95th=[ 213], 00:39:11.776 | 99.99th=[ 213] 00:39:11.776 bw ( KiB/s): min= 384, max= 512, per=4.18%, avg=396.80, stdev=36.93, samples=20 00:39:11.776 iops : min= 96, max= 128, avg=99.20, stdev= 9.23, samples=20 00:39:11.776 lat (msec) : 100=1.39%, 250=98.61% 00:39:11.776 cpu : usr=97.60%, sys=1.76%, ctx=34, majf=0, minf=1637 00:39:11.776 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:39:11.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 issued rwts: total=1008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.776 filename1: (groupid=0, jobs=1): err= 0: pid=474094: Sat Jul 13 13:50:45 2024 00:39:11.776 read: IOPS=98, BW=393KiB/s (403kB/s)(3968KiB/10084msec) 00:39:11.776 slat (usec): min=6, max=148, avg=53.60, stdev= 9.87 00:39:11.776 clat (msec): min=129, max=208, avg=162.15, stdev=10.76 00:39:11.776 lat (msec): min=129, max=208, avg=162.20, stdev=10.76 00:39:11.776 clat percentiles (msec): 00:39:11.776 | 1.00th=[ 131], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 155], 00:39:11.776 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.776 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.776 | 99.00th=[ 209], 99.50th=[ 209], 99.90th=[ 209], 99.95th=[ 209], 00:39:11.776 | 99.99th=[ 209] 00:39:11.776 bw ( KiB/s): min= 256, max= 512, per=4.12%, avg=390.30, stdev=50.45, samples=20 00:39:11.776 iops : min= 64, max= 128, avg=97.55, stdev=12.62, samples=20 00:39:11.776 lat (msec) : 250=100.00% 00:39:11.776 cpu : usr=95.20%, sys=2.73%, ctx=101, majf=0, minf=1634 00:39:11.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:11.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.776 filename1: (groupid=0, jobs=1): err= 0: pid=474095: Sat Jul 13 13:50:45 2024 00:39:11.776 read: IOPS=96, BW=388KiB/s (397kB/s)(3904KiB/10066msec) 00:39:11.776 slat (usec): min=15, max=103, avg=56.54, stdev=11.86 00:39:11.776 clat (msec): min=141, max=299, avg=164.52, stdev=19.06 00:39:11.776 lat (msec): min=141, max=299, avg=164.57, stdev=19.05 00:39:11.776 clat percentiles (msec): 00:39:11.776 | 1.00th=[ 142], 5.00th=[ 150], 10.00th=[ 153], 20.00th=[ 155], 00:39:11.776 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 167], 00:39:11.776 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.776 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:39:11.776 | 99.99th=[ 300] 00:39:11.776 bw ( KiB/s): min= 256, max= 512, per=4.05%, avg=384.00, stdev=41.53, samples=20 00:39:11.776 iops : min= 64, max= 128, avg=96.00, stdev=10.38, samples=20 00:39:11.776 lat (msec) : 250=98.36%, 500=1.64% 00:39:11.776 cpu : usr=95.04%, sys=2.96%, ctx=224, majf=0, minf=1636 00:39:11.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 
32=0.0%, >=64=0.0% 00:39:11.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.776 filename1: (groupid=0, jobs=1): err= 0: pid=474096: Sat Jul 13 13:50:45 2024 00:39:11.776 read: IOPS=96, BW=388KiB/s (397kB/s)(3904KiB/10073msec) 00:39:11.776 slat (nsec): min=19677, max=86402, avg=53603.80, stdev=9649.43 00:39:11.776 clat (msec): min=129, max=341, avg=164.58, stdev=24.47 00:39:11.776 lat (msec): min=129, max=341, avg=164.63, stdev=24.47 00:39:11.776 clat percentiles (msec): 00:39:11.776 | 1.00th=[ 131], 5.00th=[ 146], 10.00th=[ 150], 20.00th=[ 155], 00:39:11.776 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 167], 00:39:11.776 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.776 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 342], 00:39:11.776 | 99.99th=[ 342] 00:39:11.776 bw ( KiB/s): min= 256, max= 512, per=4.05%, avg=384.00, stdev=41.53, samples=20 00:39:11.776 iops : min= 64, max= 128, avg=96.00, stdev=10.38, samples=20 00:39:11.776 lat (msec) : 250=98.36%, 500=1.64% 00:39:11.776 cpu : usr=96.50%, sys=2.14%, ctx=131, majf=0, minf=1636 00:39:11.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:11.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.776 filename1: (groupid=0, jobs=1): err= 0: pid=474097: Sat Jul 13 13:50:45 2024 00:39:11.776 read: IOPS=96, BW=387KiB/s (397kB/s)(3904KiB/10075msec) 00:39:11.776 slat (nsec): min=12371, max=92544, avg=56417.00, stdev=9859.58 00:39:11.776 clat (msec): min=113, max=268, avg=163.98, stdev=16.73 00:39:11.776 lat (msec): min=113, max=268, avg=164.03, stdev=16.72 00:39:11.776 clat percentiles (msec): 00:39:11.776 | 1.00th=[ 142], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 155], 00:39:11.776 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.776 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.776 | 99.00th=[ 268], 99.50th=[ 268], 99.90th=[ 271], 99.95th=[ 271], 00:39:11.776 | 99.99th=[ 271] 00:39:11.776 bw ( KiB/s): min= 256, max= 496, per=4.04%, avg=384.00, stdev=53.70, samples=20 00:39:11.776 iops : min= 64, max= 124, avg=96.00, stdev=13.42, samples=20 00:39:11.776 lat (msec) : 250=98.36%, 500=1.64% 00:39:11.776 cpu : usr=97.04%, sys=1.93%, ctx=230, majf=0, minf=1636 00:39:11.776 IO depths : 1=1.7%, 2=8.0%, 4=25.0%, 8=54.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:39:11.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.776 filename1: (groupid=0, jobs=1): err= 0: pid=474098: Sat Jul 13 13:50:45 2024 00:39:11.776 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10020msec) 00:39:11.776 slat (nsec): min=11796, max=86417, avg=30843.18, stdev=20027.12 00:39:11.776 clat (msec): min=101, max=265, avg=163.99, stdev=21.87 00:39:11.776 lat (msec): min=101, 
max=265, avg=164.02, stdev=21.86 00:39:11.776 clat percentiles (msec): 00:39:11.776 | 1.00th=[ 108], 5.00th=[ 142], 10.00th=[ 146], 20.00th=[ 155], 00:39:11.776 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.776 | 70.00th=[ 169], 80.00th=[ 171], 90.00th=[ 171], 95.00th=[ 203], 00:39:11.776 | 99.00th=[ 266], 99.50th=[ 266], 99.90th=[ 266], 99.95th=[ 266], 00:39:11.776 | 99.99th=[ 266] 00:39:11.776 bw ( KiB/s): min= 256, max= 496, per=4.04%, avg=384.00, stdev=55.43, samples=20 00:39:11.776 iops : min= 64, max= 124, avg=96.00, stdev=13.86, samples=20 00:39:11.776 lat (msec) : 250=98.36%, 500=1.64% 00:39:11.776 cpu : usr=97.30%, sys=1.95%, ctx=76, majf=0, minf=1634 00:39:11.776 IO depths : 1=3.9%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:39:11.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.776 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.777 filename1: (groupid=0, jobs=1): err= 0: pid=474099: Sat Jul 13 13:50:45 2024 00:39:11.777 read: IOPS=101, BW=407KiB/s (417kB/s)(4096KiB/10066msec) 00:39:11.777 slat (usec): min=9, max=312, avg=54.37, stdev=17.26 00:39:11.777 clat (msec): min=34, max=220, avg=156.81, stdev=27.92 00:39:11.777 lat (msec): min=34, max=220, avg=156.87, stdev=27.92 00:39:11.777 clat percentiles (msec): 00:39:11.777 | 1.00th=[ 35], 5.00th=[ 104], 10.00th=[ 133], 20.00th=[ 150], 00:39:11.777 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.777 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.777 | 99.00th=[ 220], 99.50th=[ 220], 99.90th=[ 222], 99.95th=[ 222], 00:39:11.777 | 99.99th=[ 222] 00:39:11.777 bw ( KiB/s): min= 384, max= 640, per=4.25%, avg=403.20, stdev=61.11, samples=20 00:39:11.777 iops : min= 96, max= 160, avg=100.80, stdev=15.28, samples=20 00:39:11.777 lat (msec) : 50=1.56%, 100=3.32%, 250=95.12% 00:39:11.777 cpu : usr=95.94%, sys=2.55%, ctx=104, majf=0, minf=1635 00:39:11.777 IO depths : 1=3.9%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:39:11.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 issued rwts: total=1024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.777 filename2: (groupid=0, jobs=1): err= 0: pid=474100: Sat Jul 13 13:50:45 2024 00:39:11.777 read: IOPS=104, BW=417KiB/s (427kB/s)(4224KiB/10129msec) 00:39:11.777 slat (usec): min=6, max=125, avg=57.27, stdev=16.33 00:39:11.777 clat (msec): min=14, max=245, avg=152.98, stdev=36.62 00:39:11.777 lat (msec): min=14, max=246, avg=153.04, stdev=36.63 00:39:11.777 clat percentiles (msec): 00:39:11.777 | 1.00th=[ 15], 5.00th=[ 86], 10.00th=[ 112], 20.00th=[ 148], 00:39:11.777 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.777 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.777 | 99.00th=[ 239], 99.50th=[ 241], 99.90th=[ 247], 99.95th=[ 247], 00:39:11.777 | 99.99th=[ 247] 00:39:11.777 bw ( KiB/s): min= 368, max= 768, per=4.38%, avg=416.00, stdev=91.99, samples=20 00:39:11.777 iops : min= 92, max= 192, avg=104.00, stdev=23.00, samples=20 00:39:11.777 lat (msec) : 20=3.03%, 50=1.52%, 100=4.36%, 250=91.10% 00:39:11.777 cpu : usr=97.17%, sys=2.03%, 
ctx=70, majf=0, minf=1636 00:39:11.777 IO depths : 1=4.3%, 2=10.4%, 4=24.6%, 8=52.5%, 16=8.2%, 32=0.0%, >=64=0.0% 00:39:11.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 issued rwts: total=1056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.777 filename2: (groupid=0, jobs=1): err= 0: pid=474101: Sat Jul 13 13:50:45 2024 00:39:11.777 read: IOPS=98, BW=393KiB/s (402kB/s)(3968KiB/10099msec) 00:39:11.777 slat (nsec): min=4733, max=97984, avg=27078.33, stdev=8561.15 00:39:11.777 clat (msec): min=105, max=213, avg=162.65, stdev=14.84 00:39:11.777 lat (msec): min=105, max=213, avg=162.68, stdev=14.84 00:39:11.777 clat percentiles (msec): 00:39:11.777 | 1.00th=[ 115], 5.00th=[ 142], 10.00th=[ 146], 20.00th=[ 153], 00:39:11.777 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:39:11.777 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 186], 00:39:11.777 | 99.00th=[ 213], 99.50th=[ 213], 99.90th=[ 213], 99.95th=[ 213], 00:39:11.777 | 99.99th=[ 213] 00:39:11.777 bw ( KiB/s): min= 368, max= 512, per=4.12%, avg=390.40, stdev=29.09, samples=20 00:39:11.777 iops : min= 92, max= 128, avg=97.60, stdev= 7.27, samples=20 00:39:11.777 lat (msec) : 250=100.00% 00:39:11.777 cpu : usr=96.49%, sys=2.33%, ctx=109, majf=0, minf=1636 00:39:11.777 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:39:11.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.777 filename2: (groupid=0, jobs=1): err= 0: pid=474102: Sat Jul 13 13:50:45 2024 00:39:11.777 read: IOPS=96, BW=388KiB/s (397kB/s)(3904KiB/10069msec) 00:39:11.777 slat (nsec): min=16290, max=86867, avg=54844.03, stdev=8791.71 00:39:11.777 clat (msec): min=101, max=301, avg=163.90, stdev=17.77 00:39:11.777 lat (msec): min=101, max=301, avg=163.96, stdev=17.77 00:39:11.777 clat percentiles (msec): 00:39:11.777 | 1.00th=[ 116], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 155], 00:39:11.777 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.777 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.777 | 99.00th=[ 264], 99.50th=[ 264], 99.90th=[ 300], 99.95th=[ 300], 00:39:11.777 | 99.99th=[ 300] 00:39:11.777 bw ( KiB/s): min= 256, max= 512, per=4.04%, avg=384.00, stdev=57.10, samples=20 00:39:11.777 iops : min= 64, max= 128, avg=96.00, stdev=14.28, samples=20 00:39:11.777 lat (msec) : 250=98.36%, 500=1.64% 00:39:11.777 cpu : usr=97.52%, sys=1.69%, ctx=50, majf=0, minf=1634 00:39:11.777 IO depths : 1=1.3%, 2=7.6%, 4=25.0%, 8=54.9%, 16=11.2%, 32=0.0%, >=64=0.0% 00:39:11.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.777 filename2: (groupid=0, jobs=1): err= 0: pid=474103: Sat Jul 13 13:50:45 2024 00:39:11.777 read: IOPS=98, BW=393KiB/s (402kB/s)(3968KiB/10101msec) 00:39:11.777 slat (nsec): min=12516, max=94814, avg=32647.73, stdev=10870.59 00:39:11.777 clat 
(msec): min=105, max=229, avg=162.64, stdev=14.79 00:39:11.777 lat (msec): min=105, max=229, avg=162.67, stdev=14.78 00:39:11.777 clat percentiles (msec): 00:39:11.777 | 1.00th=[ 115], 5.00th=[ 142], 10.00th=[ 146], 20.00th=[ 153], 00:39:11.777 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:39:11.777 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 188], 00:39:11.777 | 99.00th=[ 211], 99.50th=[ 213], 99.90th=[ 230], 99.95th=[ 230], 00:39:11.777 | 99.99th=[ 230] 00:39:11.777 bw ( KiB/s): min= 368, max= 496, per=4.12%, avg=390.40, stdev=25.64, samples=20 00:39:11.777 iops : min= 92, max= 124, avg=97.60, stdev= 6.41, samples=20 00:39:11.777 lat (msec) : 250=100.00% 00:39:11.777 cpu : usr=97.43%, sys=2.12%, ctx=29, majf=0, minf=1634 00:39:11.777 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:39:11.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.777 filename2: (groupid=0, jobs=1): err= 0: pid=474104: Sat Jul 13 13:50:45 2024 00:39:11.777 read: IOPS=96, BW=388KiB/s (397kB/s)(3904KiB/10068msec) 00:39:11.777 slat (usec): min=10, max=147, avg=35.95, stdev=12.94 00:39:11.777 clat (msec): min=105, max=344, avg=164.70, stdev=20.14 00:39:11.777 lat (msec): min=105, max=344, avg=164.73, stdev=20.14 00:39:11.777 clat percentiles (msec): 00:39:11.777 | 1.00th=[ 142], 5.00th=[ 146], 10.00th=[ 153], 20.00th=[ 155], 00:39:11.777 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 167], 00:39:11.777 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.777 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 347], 99.95th=[ 347], 00:39:11.777 | 99.99th=[ 347] 00:39:11.777 bw ( KiB/s): min= 256, max= 512, per=4.05%, avg=384.00, stdev=41.53, samples=20 00:39:11.777 iops : min= 64, max= 128, avg=96.00, stdev=10.38, samples=20 00:39:11.777 lat (msec) : 250=98.36%, 500=1.64% 00:39:11.777 cpu : usr=95.24%, sys=3.18%, ctx=172, majf=0, minf=1634 00:39:11.777 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:11.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.777 filename2: (groupid=0, jobs=1): err= 0: pid=474105: Sat Jul 13 13:50:45 2024 00:39:11.777 read: IOPS=99, BW=399KiB/s (408kB/s)(4032KiB/10114msec) 00:39:11.777 slat (usec): min=4, max=331, avg=51.26, stdev=16.25 00:39:11.777 clat (msec): min=85, max=211, avg=160.09, stdev=14.64 00:39:11.777 lat (msec): min=85, max=211, avg=160.14, stdev=14.64 00:39:11.777 clat percentiles (msec): 00:39:11.777 | 1.00th=[ 86], 5.00th=[ 142], 10.00th=[ 146], 20.00th=[ 153], 00:39:11.777 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.777 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.777 | 99.00th=[ 171], 99.50th=[ 205], 99.90th=[ 211], 99.95th=[ 211], 00:39:11.777 | 99.99th=[ 211] 00:39:11.777 bw ( KiB/s): min= 384, max= 512, per=4.18%, avg=396.80, stdev=39.40, samples=20 00:39:11.777 iops : min= 96, max= 128, avg=99.20, stdev= 9.85, samples=20 00:39:11.777 lat (msec) : 100=1.59%, 
250=98.41% 00:39:11.777 cpu : usr=96.48%, sys=2.29%, ctx=39, majf=0, minf=1635 00:39:11.777 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:39:11.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 issued rwts: total=1008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.777 filename2: (groupid=0, jobs=1): err= 0: pid=474106: Sat Jul 13 13:50:45 2024 00:39:11.777 read: IOPS=102, BW=411KiB/s (421kB/s)(4160KiB/10125msec) 00:39:11.777 slat (nsec): min=4328, max=79990, avg=25663.85, stdev=10824.64 00:39:11.777 clat (msec): min=36, max=246, avg=155.55, stdev=28.38 00:39:11.777 lat (msec): min=36, max=246, avg=155.57, stdev=28.39 00:39:11.777 clat percentiles (msec): 00:39:11.777 | 1.00th=[ 37], 5.00th=[ 88], 10.00th=[ 133], 20.00th=[ 153], 00:39:11.777 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.777 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 171], 95.00th=[ 171], 00:39:11.777 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 247], 99.95th=[ 247], 00:39:11.777 | 99.99th=[ 247] 00:39:11.777 bw ( KiB/s): min= 368, max= 640, per=4.32%, avg=409.60, stdev=67.36, samples=20 00:39:11.777 iops : min= 92, max= 160, avg=102.40, stdev=16.84, samples=20 00:39:11.777 lat (msec) : 50=1.54%, 100=6.15%, 250=92.31% 00:39:11.777 cpu : usr=97.98%, sys=1.60%, ctx=17, majf=0, minf=1637 00:39:11.777 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:39:11.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.777 issued rwts: total=1040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.777 filename2: (groupid=0, jobs=1): err= 0: pid=474107: Sat Jul 13 13:50:45 2024 00:39:11.777 read: IOPS=98, BW=393KiB/s (402kB/s)(3968KiB/10101msec) 00:39:11.777 slat (usec): min=5, max=156, avg=57.99, stdev=15.67 00:39:11.777 clat (msec): min=126, max=212, avg=162.37, stdev= 9.04 00:39:11.778 lat (msec): min=126, max=212, avg=162.43, stdev= 9.04 00:39:11.778 clat percentiles (msec): 00:39:11.778 | 1.00th=[ 142], 5.00th=[ 146], 10.00th=[ 150], 20.00th=[ 155], 00:39:11.778 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:39:11.778 | 70.00th=[ 169], 80.00th=[ 169], 90.00th=[ 169], 95.00th=[ 171], 00:39:11.778 | 99.00th=[ 188], 99.50th=[ 188], 99.90th=[ 213], 99.95th=[ 213], 00:39:11.778 | 99.99th=[ 213] 00:39:11.778 bw ( KiB/s): min= 384, max= 512, per=4.12%, avg=390.40, stdev=28.62, samples=20 00:39:11.778 iops : min= 96, max= 128, avg=97.60, stdev= 7.16, samples=20 00:39:11.778 lat (msec) : 250=100.00% 00:39:11.778 cpu : usr=93.66%, sys=3.29%, ctx=105, majf=0, minf=1636 00:39:11.778 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:11.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.778 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.778 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:11.778 00:39:11.778 Run status group 0 (all jobs): 00:39:11.778 READ: bw=9477KiB/s (9704kB/s), 387KiB/s-425KiB/s (397kB/s-435kB/s), io=93.8MiB (98.3MB), run=10018-10131msec 00:39:11.778 
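A quick consistency check on the per-file numbers above (illustrative only): fio reports bandwidth and IOPS over the same interval, so with the 4 KiB read size implied by the bw/iops ratio in these jobs, BW = IOPS x block size. For the final filename2 job (avg bw 390.40 KiB/s, avg 97.60 IOPS):
  echo 'scale=2; 390.40 / 4' | bc    # 97.60 -- matches the reported avg iops
The same relation holds for every job in this group, which is a cheap way to spot a mis-read bandwidth line when eyeballing these logs.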
----------------------------------------------------- 00:39:11.778 Suppressions used: 00:39:11.778 count bytes template 00:39:11.778 45 402 /usr/src/fio/parse.c 00:39:11.778 1 8 libtcmalloc_minimal.so 00:39:11.778 1 904 libcrypto.so 00:39:11.778 ----------------------------------------------------- 00:39:11.778 00:39:11.778 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:39:11.778 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:11.778 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:11.778 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:11.778 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:11.778 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:11.778 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.778 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.778 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.778 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:11.778 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.778 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null2 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.037 bdev_null0 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.037 [2024-07-13 13:50:46.585166] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.037 bdev_null1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:12.037 { 00:39:12.037 "params": { 00:39:12.037 "name": "Nvme$subsystem", 00:39:12.037 "trtype": "$TEST_TRANSPORT", 00:39:12.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:12.037 "adrfam": "ipv4", 00:39:12.037 "trsvcid": "$NVMF_PORT", 00:39:12.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:12.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:12.037 "hdgst": ${hdgst:-false}, 00:39:12.037 "ddgst": ${ddgst:-false} 00:39:12.037 }, 00:39:12.037 "method": "bdev_nvme_attach_controller" 00:39:12.037 } 00:39:12.037 EOF 00:39:12.037 )") 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:12.037 { 00:39:12.037 "params": { 00:39:12.037 "name": "Nvme$subsystem", 00:39:12.037 "trtype": "$TEST_TRANSPORT", 00:39:12.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:12.037 "adrfam": "ipv4", 00:39:12.037 "trsvcid": "$NVMF_PORT", 00:39:12.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:12.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:12.037 "hdgst": ${hdgst:-false}, 00:39:12.037 "ddgst": ${ddgst:-false} 00:39:12.037 }, 00:39:12.037 "method": "bdev_nvme_attach_controller" 00:39:12.037 } 00:39:12.037 EOF 00:39:12.037 )") 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
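For readers following the trace: gen_fio_conf emits one job section per file onto /dev/fd/61 while gen_nvmf_target_json renders the bdev_nvme_attach_controller config onto /dev/fd/62, and fio_bdev consumes both. A rough stand-alone equivalent of what dif.sh@115 configures is sketched below; the bdev names (Nvme0n1, Nvme1n1), the file names, and the relative plugin path are illustrative guesses, since the real run streams everything through file descriptors rather than writing files.
# sketch only -- mirrors numjobs=2, iodepth=8, bs=8k,16k,128k, runtime=5, files=1 from the trace
cat > randread.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=nvme.json
thread=1
time_based=1
runtime=5
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio randread.fio
Two files at numjobs=2 give the "Starting 4 threads" banner seen just below; the actual invocation in the trace LD_PRELOADs the same plugin from the workspace build tree.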
00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:12.037 "params": { 00:39:12.037 "name": "Nvme0", 00:39:12.037 "trtype": "tcp", 00:39:12.037 "traddr": "10.0.0.2", 00:39:12.037 "adrfam": "ipv4", 00:39:12.037 "trsvcid": "4420", 00:39:12.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:12.037 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:12.037 "hdgst": false, 00:39:12.037 "ddgst": false 00:39:12.037 }, 00:39:12.037 "method": "bdev_nvme_attach_controller" 00:39:12.037 },{ 00:39:12.037 "params": { 00:39:12.037 "name": "Nvme1", 00:39:12.037 "trtype": "tcp", 00:39:12.037 "traddr": "10.0.0.2", 00:39:12.037 "adrfam": "ipv4", 00:39:12.037 "trsvcid": "4420", 00:39:12.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:12.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:12.037 "hdgst": false, 00:39:12.037 "ddgst": false 00:39:12.037 }, 00:39:12.037 "method": "bdev_nvme_attach_controller" 00:39:12.037 }' 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:12.037 13:50:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:12.294 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:12.294 ... 00:39:12.294 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:12.294 ... 
00:39:12.294 fio-3.35 00:39:12.294 Starting 4 threads 00:39:12.294 EAL: No free 2048 kB hugepages reported on node 1 00:39:18.849 00:39:18.849 filename0: (groupid=0, jobs=1): err= 0: pid=475611: Sat Jul 13 13:50:53 2024 00:39:18.849 read: IOPS=1468, BW=11.5MiB/s (12.0MB/s)(57.4MiB/5003msec) 00:39:18.849 slat (nsec): min=6059, max=68666, avg=23070.79, stdev=6705.99 00:39:18.849 clat (usec): min=1165, max=16089, avg=5376.34, stdev=1110.53 00:39:18.849 lat (usec): min=1192, max=16128, avg=5399.41, stdev=1109.70 00:39:18.849 clat percentiles (usec): 00:39:18.849 | 1.00th=[ 3818], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4686], 00:39:18.849 | 30.00th=[ 4817], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5211], 00:39:18.849 | 70.00th=[ 5407], 80.00th=[ 5735], 90.00th=[ 7177], 95.00th=[ 7635], 00:39:18.849 | 99.00th=[ 8717], 99.50th=[ 9241], 99.90th=[15795], 99.95th=[15795], 00:39:18.849 | 99.99th=[16057] 00:39:18.849 bw ( KiB/s): min=11008, max=12240, per=23.92%, avg=11674.67, stdev=421.05, samples=9 00:39:18.849 iops : min= 1376, max= 1530, avg=1459.33, stdev=52.63, samples=9 00:39:18.849 lat (msec) : 2=0.03%, 4=1.56%, 10=98.30%, 20=0.11% 00:39:18.849 cpu : usr=94.48%, sys=4.72%, ctx=48, majf=0, minf=1636 00:39:18.849 IO depths : 1=0.6%, 2=2.3%, 4=70.7%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.849 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.849 issued rwts: total=7349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.849 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.849 filename0: (groupid=0, jobs=1): err= 0: pid=475612: Sat Jul 13 13:50:53 2024 00:39:18.849 read: IOPS=1554, BW=12.1MiB/s (12.7MB/s)(60.7MiB/5001msec) 00:39:18.849 slat (nsec): min=6016, max=65231, avg=17970.43, stdev=6674.24 00:39:18.849 clat (usec): min=1056, max=14142, avg=5091.42, stdev=939.15 00:39:18.849 lat (usec): min=1077, max=14161, avg=5109.39, stdev=938.82 00:39:18.849 clat percentiles (usec): 00:39:18.849 | 1.00th=[ 3326], 5.00th=[ 3916], 10.00th=[ 4228], 20.00th=[ 4424], 00:39:18.849 | 30.00th=[ 4686], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5080], 00:39:18.849 | 70.00th=[ 5211], 80.00th=[ 5473], 90.00th=[ 6390], 95.00th=[ 7046], 00:39:18.849 | 99.00th=[ 7767], 99.50th=[ 8029], 99.90th=[13960], 99.95th=[14091], 00:39:18.849 | 99.99th=[14091] 00:39:18.849 bw ( KiB/s): min=11888, max=13104, per=25.45%, avg=12420.33, stdev=398.54, samples=9 00:39:18.849 iops : min= 1486, max= 1638, avg=1552.44, stdev=49.86, samples=9 00:39:18.849 lat (msec) : 2=0.04%, 4=6.33%, 10=93.52%, 20=0.12% 00:39:18.849 cpu : usr=95.02%, sys=4.42%, ctx=15, majf=0, minf=1634 00:39:18.849 IO depths : 1=0.1%, 2=5.5%, 4=66.6%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.849 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.849 issued rwts: total=7772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.849 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.849 filename1: (groupid=0, jobs=1): err= 0: pid=475613: Sat Jul 13 13:50:53 2024 00:39:18.849 read: IOPS=1612, BW=12.6MiB/s (13.2MB/s)(63.0MiB/5002msec) 00:39:18.849 slat (nsec): min=5645, max=60171, avg=17139.35, stdev=5929.68 00:39:18.849 clat (usec): min=1414, max=15074, avg=4908.50, stdev=951.90 00:39:18.849 lat (usec): min=1434, max=15107, avg=4925.63, stdev=952.05 00:39:18.849 clat percentiles (usec): 00:39:18.849 | 
1.00th=[ 3064], 5.00th=[ 3589], 10.00th=[ 3884], 20.00th=[ 4228], 00:39:18.849 | 30.00th=[ 4424], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 5014], 00:39:18.849 | 70.00th=[ 5145], 80.00th=[ 5342], 90.00th=[ 6128], 95.00th=[ 6718], 00:39:18.849 | 99.00th=[ 7570], 99.50th=[ 7963], 99.90th=[ 9896], 99.95th=[14746], 00:39:18.849 | 99.99th=[15139] 00:39:18.849 bw ( KiB/s): min=12144, max=13600, per=26.42%, avg=12893.44, stdev=491.74, samples=9 00:39:18.849 iops : min= 1518, max= 1700, avg=1611.67, stdev=61.47, samples=9 00:39:18.850 lat (msec) : 2=0.02%, 4=13.56%, 10=86.31%, 20=0.10% 00:39:18.850 cpu : usr=95.50%, sys=3.94%, ctx=6, majf=0, minf=1636 00:39:18.850 IO depths : 1=0.1%, 2=7.0%, 4=63.9%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.850 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.850 issued rwts: total=8065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.850 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.850 filename1: (groupid=0, jobs=1): err= 0: pid=475614: Sat Jul 13 13:50:53 2024 00:39:18.850 read: IOPS=1465, BW=11.4MiB/s (12.0MB/s)(57.3MiB/5002msec) 00:39:18.850 slat (usec): min=5, max=136, avg=22.06, stdev= 7.24 00:39:18.850 clat (usec): min=943, max=13337, avg=5392.60, stdev=1000.04 00:39:18.850 lat (usec): min=984, max=13368, avg=5414.65, stdev=998.88 00:39:18.850 clat percentiles (usec): 00:39:18.850 | 1.00th=[ 3556], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4686], 00:39:18.850 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5276], 00:39:18.850 | 70.00th=[ 5538], 80.00th=[ 6063], 90.00th=[ 6915], 95.00th=[ 7439], 00:39:18.850 | 99.00th=[ 8356], 99.50th=[ 8717], 99.90th=[13042], 99.95th=[13173], 00:39:18.850 | 99.99th=[13304] 00:39:18.850 bw ( KiB/s): min=11216, max=12464, per=24.21%, avg=11815.11, stdev=351.10, samples=9 00:39:18.850 iops : min= 1402, max= 1558, avg=1476.89, stdev=43.89, samples=9 00:39:18.850 lat (usec) : 1000=0.01% 00:39:18.850 lat (msec) : 2=0.16%, 4=2.18%, 10=97.52%, 20=0.12% 00:39:18.850 cpu : usr=94.44%, sys=4.64%, ctx=9, majf=0, minf=1637 00:39:18.850 IO depths : 1=0.1%, 2=4.8%, 4=66.4%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.850 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.850 issued rwts: total=7331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.850 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.850 00:39:18.850 Run status group 0 (all jobs): 00:39:18.850 READ: bw=47.7MiB/s (50.0MB/s), 11.4MiB/s-12.6MiB/s (12.0MB/s-13.2MB/s), io=238MiB (250MB), run=5001-5003msec 00:39:19.417 ----------------------------------------------------- 00:39:19.417 Suppressions used: 00:39:19.417 count bytes template 00:39:19.417 6 52 /usr/src/fio/parse.c 00:39:19.417 1 8 libtcmalloc_minimal.so 00:39:19.417 1 904 libcrypto.so 00:39:19.417 ----------------------------------------------------- 00:39:19.417 00:39:19.417 13:50:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:19.417 13:50:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:19.417 13:50:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:19.417 13:50:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:19.417 13:50:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:19.417 
13:50:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:19.417 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.418 00:39:19.418 real 0m27.844s 00:39:19.418 user 4m34.399s 00:39:19.418 sys 0m8.889s 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:19.418 13:50:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:19.418 ************************************ 00:39:19.418 END TEST fio_dif_rand_params 00:39:19.418 ************************************ 00:39:19.418 13:50:54 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:19.418 13:50:54 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:19.418 13:50:54 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:19.418 13:50:54 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:19.418 13:50:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:19.418 ************************************ 00:39:19.418 START TEST fio_dif_digest 00:39:19.418 ************************************ 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:19.418 13:50:54 
nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:19.418 bdev_null0 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:19.418 [2024-07-13 13:50:54.111536] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:19.418 { 
00:39:19.418 "params": { 00:39:19.418 "name": "Nvme$subsystem", 00:39:19.418 "trtype": "$TEST_TRANSPORT", 00:39:19.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:19.418 "adrfam": "ipv4", 00:39:19.418 "trsvcid": "$NVMF_PORT", 00:39:19.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:19.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:19.418 "hdgst": ${hdgst:-false}, 00:39:19.418 "ddgst": ${ddgst:-false} 00:39:19.418 }, 00:39:19.418 "method": "bdev_nvme_attach_controller" 00:39:19.418 } 00:39:19.418 EOF 00:39:19.418 )") 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:19.418 "params": { 00:39:19.418 "name": "Nvme0", 00:39:19.418 "trtype": "tcp", 00:39:19.418 "traddr": "10.0.0.2", 00:39:19.418 "adrfam": "ipv4", 00:39:19.418 "trsvcid": "4420", 00:39:19.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:19.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:19.418 "hdgst": true, 00:39:19.418 "ddgst": true 00:39:19.418 }, 00:39:19.418 "method": "bdev_nvme_attach_controller" 00:39:19.418 }' 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:19.418 13:50:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:19.677 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:19.677 ... 00:39:19.677 fio-3.35 00:39:19.677 Starting 3 threads 00:39:19.936 EAL: No free 2048 kB hugepages reported on node 1 00:39:32.136 00:39:32.136 filename0: (groupid=0, jobs=1): err= 0: pid=476605: Sat Jul 13 13:51:05 2024 00:39:32.136 read: IOPS=161, BW=20.1MiB/s (21.1MB/s)(202MiB/10049msec) 00:39:32.136 slat (nsec): min=7969, max=62167, avg=26059.27, stdev=6127.50 00:39:32.136 clat (msec): min=10, max=100, avg=18.56, stdev= 6.46 00:39:32.136 lat (msec): min=10, max=100, avg=18.59, stdev= 6.46 00:39:32.136 clat percentiles (msec): 00:39:32.136 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 17], 00:39:32.136 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 18], 60.00th=[ 19], 00:39:32.136 | 70.00th=[ 19], 80.00th=[ 20], 90.00th=[ 20], 95.00th=[ 21], 00:39:32.136 | 99.00th=[ 59], 99.50th=[ 61], 99.90th=[ 100], 99.95th=[ 101], 00:39:32.136 | 99.99th=[ 101] 00:39:32.136 bw ( KiB/s): min=16384, max=23552, per=32.69%, avg=20686.65, stdev=1757.26, samples=20 00:39:32.136 iops : min= 128, max= 184, avg=161.60, stdev=13.74, samples=20 00:39:32.136 lat (msec) : 20=91.85%, 50=6.18%, 100=1.91%, 250=0.06% 00:39:32.136 cpu : usr=94.06%, sys=5.36%, ctx=18, majf=0, minf=1637 00:39:32.136 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.136 issued rwts: total=1619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.136 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:32.136 filename0: (groupid=0, jobs=1): err= 0: pid=476606: Sat Jul 13 13:51:05 2024 00:39:32.136 read: IOPS=175, BW=21.9MiB/s (23.0MB/s)(220MiB/10049msec) 00:39:32.136 slat (nsec): min=10923, max=52599, avg=21677.32, stdev=5646.44 00:39:32.136 clat (usec): min=9887, max=58733, avg=17046.38, stdev=2296.93 00:39:32.136 lat (usec): min=9908, max=58757, avg=17068.06, stdev=2297.15 00:39:32.136 clat percentiles (usec): 00:39:32.136 | 1.00th=[11076], 5.00th=[12780], 10.00th=[14877], 20.00th=[15926], 00:39:32.136 | 30.00th=[16450], 
40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:39:32.136 | 70.00th=[17957], 80.00th=[18482], 90.00th=[19006], 95.00th=[19792], 00:39:32.136 | 99.00th=[20841], 99.50th=[21103], 99.90th=[52167], 99.95th=[58983], 00:39:32.136 | 99.99th=[58983] 00:39:32.136 bw ( KiB/s): min=21248, max=25088, per=35.62%, avg=22540.80, stdev=973.85, samples=20 00:39:32.136 iops : min= 166, max= 196, avg=176.10, stdev= 7.61, samples=20 00:39:32.136 lat (msec) : 10=0.06%, 20=96.65%, 50=3.18%, 100=0.11% 00:39:32.136 cpu : usr=93.05%, sys=6.40%, ctx=17, majf=0, minf=1637 00:39:32.136 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.136 issued rwts: total=1763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.136 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:32.136 filename0: (groupid=0, jobs=1): err= 0: pid=476607: Sat Jul 13 13:51:05 2024 00:39:32.136 read: IOPS=157, BW=19.7MiB/s (20.7MB/s)(198MiB/10047msec) 00:39:32.136 slat (nsec): min=7746, max=51095, avg=21767.32, stdev=5558.89 00:39:32.136 clat (usec): min=11829, max=62300, avg=18950.65, stdev=2972.65 00:39:32.136 lat (usec): min=11849, max=62314, avg=18972.42, stdev=2972.67 00:39:32.136 clat percentiles (usec): 00:39:32.136 | 1.00th=[12780], 5.00th=[15139], 10.00th=[16581], 20.00th=[17433], 00:39:32.136 | 30.00th=[18220], 40.00th=[18482], 50.00th=[19006], 60.00th=[19268], 00:39:32.136 | 70.00th=[19792], 80.00th=[20317], 90.00th=[21103], 95.00th=[21627], 00:39:32.136 | 99.00th=[23200], 99.50th=[26870], 99.90th=[61080], 99.95th=[62129], 00:39:32.136 | 99.99th=[62129] 00:39:32.136 bw ( KiB/s): min=18176, max=22016, per=32.04%, avg=20275.20, stdev=991.83, samples=20 00:39:32.136 iops : min= 142, max= 172, avg=158.40, stdev= 7.75, samples=20 00:39:32.136 lat (msec) : 20=73.71%, 50=25.98%, 100=0.32% 00:39:32.136 cpu : usr=93.13%, sys=6.33%, ctx=23, majf=0, minf=1634 00:39:32.136 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.136 issued rwts: total=1586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.136 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:32.136 00:39:32.137 Run status group 0 (all jobs): 00:39:32.137 READ: bw=61.8MiB/s (64.8MB/s), 19.7MiB/s-21.9MiB/s (20.7MB/s-23.0MB/s), io=621MiB (651MB), run=10047-10049msec 00:39:32.137 ----------------------------------------------------- 00:39:32.137 Suppressions used: 00:39:32.137 count bytes template 00:39:32.137 5 44 /usr/src/fio/parse.c 00:39:32.137 1 8 libtcmalloc_minimal.so 00:39:32.137 1 904 libcrypto.so 00:39:32.137 ----------------------------------------------------- 00:39:32.137 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:32.137 13:51:06 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.137 00:39:32.137 real 0m12.509s 00:39:32.137 user 0m30.456s 00:39:32.137 sys 0m2.288s 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:32.137 13:51:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:32.137 ************************************ 00:39:32.137 END TEST fio_dif_digest 00:39:32.137 ************************************ 00:39:32.137 13:51:06 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:32.137 13:51:06 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:32.137 13:51:06 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:32.137 13:51:06 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:32.137 13:51:06 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:39:32.137 13:51:06 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:32.137 13:51:06 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:39:32.137 13:51:06 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:32.137 13:51:06 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:32.137 rmmod nvme_tcp 00:39:32.137 rmmod nvme_fabrics 00:39:32.137 rmmod nvme_keyring 00:39:32.137 13:51:06 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:32.137 13:51:06 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:39:32.137 13:51:06 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:39:32.137 13:51:06 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 469710 ']' 00:39:32.137 13:51:06 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 469710 00:39:32.137 13:51:06 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 469710 ']' 00:39:32.137 13:51:06 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 469710 00:39:32.137 13:51:06 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:39:32.137 13:51:06 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:32.137 13:51:06 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 469710 00:39:32.137 13:51:06 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:32.137 13:51:06 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:32.137 13:51:06 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 469710' 00:39:32.137 killing process with pid 469710 00:39:32.137 13:51:06 nvmf_dif -- common/autotest_common.sh@967 -- # kill 469710 00:39:32.137 13:51:06 nvmf_dif -- common/autotest_common.sh@972 -- # wait 469710 00:39:33.512 13:51:08 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:39:33.512 13:51:08 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:34.470 Waiting for block devices as requested 00:39:34.470 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:34.470 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:34.728 0000:00:04.6 (8086 
0e26): vfio-pci -> ioatdma 00:39:34.728 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:34.728 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:34.728 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:34.988 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:34.988 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:34.988 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:34.988 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:35.248 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:35.248 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:35.248 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:35.508 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:35.508 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:35.508 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:35.508 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:35.768 13:51:10 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:35.768 13:51:10 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:35.768 13:51:10 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:35.768 13:51:10 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:35.768 13:51:10 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.768 13:51:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:35.768 13:51:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.725 13:51:12 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:37.725 00:39:37.725 real 1m15.387s 00:39:37.725 user 6m43.685s 00:39:37.725 sys 0m19.744s 00:39:37.725 13:51:12 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:37.725 13:51:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:37.725 ************************************ 00:39:37.725 END TEST nvmf_dif 00:39:37.725 ************************************ 00:39:37.725 13:51:12 -- common/autotest_common.sh@1142 -- # return 0 00:39:37.725 13:51:12 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:37.725 13:51:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:37.725 13:51:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:37.725 13:51:12 -- common/autotest_common.sh@10 -- # set +x 00:39:37.985 ************************************ 00:39:37.985 START TEST nvmf_abort_qd_sizes 00:39:37.985 ************************************ 00:39:37.985 13:51:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:37.985 * Looking for test storage... 
00:39:37.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.985 13:51:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.985 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:37.985 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.985 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.985 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.985 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.985 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.985 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.985 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.985 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.985 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.985 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:39:37.986 13:51:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:39.914 13:51:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:39.914 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:39.914 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:39.914 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:39.914 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:39.914 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:39.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:39.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:39:39.915 00:39:39.915 --- 10.0.0.2 ping statistics --- 00:39:39.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.915 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:39.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:39.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:39:39.915 00:39:39.915 --- 10.0.0.1 ping statistics --- 00:39:39.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.915 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:39:39.915 13:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:41.295 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:41.295 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:41.295 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:41.295 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:41.295 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:41.295 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:41.295 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:41.295 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:41.295 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:41.295 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:41.295 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:41.295 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:41.295 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:41.295 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:41.295 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:41.295 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:41.863 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=482209 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 482209 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 482209 ']' 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:42.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:42.122 13:51:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:42.122 [2024-07-13 13:51:16.863490] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:42.122 [2024-07-13 13:51:16.863641] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:42.383 EAL: No free 2048 kB hugepages reported on node 1 00:39:42.383 [2024-07-13 13:51:16.996860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:42.643 [2024-07-13 13:51:17.239503] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:42.643 [2024-07-13 13:51:17.239589] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:42.643 [2024-07-13 13:51:17.239618] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:42.643 [2024-07-13 13:51:17.239639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:42.643 [2024-07-13 13:51:17.239660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:42.643 [2024-07-13 13:51:17.239791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:42.643 [2024-07-13 13:51:17.239859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:42.643 [2024-07-13 13:51:17.239951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:42.643 [2024-07-13 13:51:17.239960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:39:43.211 13:51:17 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:43.211 13:51:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:43.211 ************************************ 00:39:43.211 START TEST spdk_target_abort 00:39:43.211 ************************************ 00:39:43.211 13:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:39:43.211 13:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:43.211 13:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:39:43.211 13:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.211 13:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.498 spdk_targetn1 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.498 [2024-07-13 13:51:20.711216] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:46.498 [2024-07-13 13:51:20.756785] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:46.498 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.499 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:46.499 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.499 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:46.499 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:46.499 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:46.499 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:46.499 13:51:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:46.499 EAL: No free 2048 kB hugepages 
reported on node 1 00:39:49.786 Initializing NVMe Controllers 00:39:49.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:49.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:49.786 Initialization complete. Launching workers. 00:39:49.786 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8842, failed: 0 00:39:49.786 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1248, failed to submit 7594 00:39:49.786 success 768, unsuccess 480, failed 0 00:39:49.786 13:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:49.786 13:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:49.786 EAL: No free 2048 kB hugepages reported on node 1 00:39:53.069 Initializing NVMe Controllers 00:39:53.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:53.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:53.069 Initialization complete. Launching workers. 00:39:53.069 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8350, failed: 0 00:39:53.069 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1249, failed to submit 7101 00:39:53.069 success 344, unsuccess 905, failed 0 00:39:53.069 13:51:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:53.069 13:51:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:53.069 EAL: No free 2048 kB hugepages reported on node 1 00:39:56.347 Initializing NVMe Controllers 00:39:56.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:56.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:56.348 Initialization complete. Launching workers. 
00:39:56.348 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27300, failed: 0 00:39:56.348 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2675, failed to submit 24625 00:39:56.348 success 238, unsuccess 2437, failed 0 00:39:56.348 13:51:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:56.348 13:51:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.348 13:51:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:56.348 13:51:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.348 13:51:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:56.348 13:51:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.348 13:51:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:57.722 13:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.722 13:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 482209 00:39:57.722 13:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 482209 ']' 00:39:57.722 13:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 482209 00:39:57.722 13:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:39:57.722 13:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:57.722 13:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 482209 00:39:57.722 13:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:57.722 13:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:57.722 13:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 482209' 00:39:57.722 killing process with pid 482209 00:39:57.722 13:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 482209 00:39:57.722 13:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 482209 00:39:58.685 00:39:58.685 real 0m15.487s 00:39:58.685 user 0m59.547s 00:39:58.685 sys 0m2.668s 00:39:58.685 13:51:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:58.685 13:51:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:58.685 ************************************ 00:39:58.685 END TEST spdk_target_abort 00:39:58.685 ************************************ 00:39:58.685 13:51:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:39:58.686 13:51:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:58.686 13:51:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:58.686 13:51:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:58.686 13:51:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:58.686 
************************************ 00:39:58.686 START TEST kernel_target_abort 00:39:58.686 ************************************ 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:58.686 13:51:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:59.619 Waiting for block devices as requested 00:39:59.876 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:59.876 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:00.134 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:00.134 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:00.134 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:00.134 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:00.392 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:00.392 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:00.392 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:00.392 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:00.651 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:00.651 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:00.651 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:00.651 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:00.911 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:00.911 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:00.911 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:01.481 No valid GPT data, bailing 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:01.481 13:51:36 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:40:01.481 00:40:01.481 Discovery Log Number of Records 2, Generation counter 2 00:40:01.481 =====Discovery Log Entry 0====== 00:40:01.481 trtype: tcp 00:40:01.481 adrfam: ipv4 00:40:01.481 subtype: current discovery subsystem 00:40:01.481 treq: not specified, sq flow control disable supported 00:40:01.481 portid: 1 00:40:01.481 trsvcid: 4420 00:40:01.481 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:01.481 traddr: 10.0.0.1 00:40:01.481 eflags: none 00:40:01.481 sectype: none 00:40:01.481 =====Discovery Log Entry 1====== 00:40:01.481 trtype: tcp 00:40:01.481 adrfam: ipv4 00:40:01.481 subtype: nvme subsystem 00:40:01.481 treq: not specified, sq flow control disable supported 00:40:01.481 portid: 1 00:40:01.481 trsvcid: 4420 00:40:01.481 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:01.481 traddr: 10.0.0.1 00:40:01.481 eflags: none 00:40:01.481 sectype: none 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:01.481 13:51:36 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:01.481 13:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:01.741 EAL: No free 2048 kB hugepages reported on node 1 00:40:05.031 Initializing NVMe Controllers 00:40:05.031 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:05.031 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:05.031 Initialization complete. Launching workers. 00:40:05.031 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29292, failed: 0 00:40:05.031 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29292, failed to submit 0 00:40:05.031 success 0, unsuccess 29292, failed 0 00:40:05.031 13:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:05.031 13:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:05.031 EAL: No free 2048 kB hugepages reported on node 1 00:40:08.329 Initializing NVMe Controllers 00:40:08.329 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:08.329 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:08.329 Initialization complete. Launching workers. 
00:40:08.329 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53077, failed: 0 00:40:08.329 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13366, failed to submit 39711 00:40:08.329 success 0, unsuccess 13366, failed 0 00:40:08.329 13:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:08.329 13:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:08.329 EAL: No free 2048 kB hugepages reported on node 1 00:40:11.613 Initializing NVMe Controllers 00:40:11.613 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:11.613 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:11.613 Initialization complete. Launching workers. 00:40:11.613 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 52180, failed: 0 00:40:11.613 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13022, failed to submit 39158 00:40:11.613 success 0, unsuccess 13022, failed 0 00:40:11.613 13:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:11.613 13:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:11.613 13:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:40:11.613 13:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:11.613 13:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:11.613 13:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:11.613 13:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:11.613 13:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:40:11.613 13:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:40:11.613 13:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:12.549 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:40:12.549 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:40:12.549 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:40:12.549 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:40:12.549 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:40:12.549 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:40:12.549 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:40:12.549 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:40:12.549 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:40:12.549 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:40:12.549 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:40:12.549 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:40:12.549 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:40:12.549 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:40:12.549 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:40:12.549 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:40:13.486 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:40:13.486 00:40:13.486 real 0m14.788s 00:40:13.486 user 0m5.824s 00:40:13.486 sys 0m3.587s 00:40:13.486 13:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:13.486 13:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:13.486 ************************************ 00:40:13.486 END TEST kernel_target_abort 00:40:13.486 ************************************ 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:13.486 rmmod nvme_tcp 00:40:13.486 rmmod nvme_fabrics 00:40:13.486 rmmod nvme_keyring 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 482209 ']' 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 482209 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 482209 ']' 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 482209 00:40:13.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (482209) - No such process 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 482209 is not found' 00:40:13.486 Process with pid 482209 is not found 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:40:13.486 13:51:48 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:14.862 Waiting for block devices as requested 00:40:14.862 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:40:14.862 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:14.862 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:14.862 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:15.121 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:15.121 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:15.121 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:15.121 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:15.380 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:15.380 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:15.380 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:15.380 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:15.640 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:15.640 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 
00:40:15.640 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:15.640 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:15.905 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:15.905 13:51:50 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:15.905 13:51:50 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:15.905 13:51:50 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:15.905 13:51:50 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:15.905 13:51:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:15.905 13:51:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:15.905 13:51:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:18.474 13:51:52 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:18.474 00:40:18.474 real 0m40.155s 00:40:18.474 user 1m7.601s 00:40:18.474 sys 0m9.530s 00:40:18.474 13:51:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:18.474 13:51:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:18.474 ************************************ 00:40:18.474 END TEST nvmf_abort_qd_sizes 00:40:18.474 ************************************ 00:40:18.474 13:51:52 -- common/autotest_common.sh@1142 -- # return 0 00:40:18.474 13:51:52 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:18.474 13:51:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:18.474 13:51:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:18.474 13:51:52 -- common/autotest_common.sh@10 -- # set +x 00:40:18.474 ************************************ 00:40:18.474 START TEST keyring_file 00:40:18.474 ************************************ 00:40:18.474 13:51:52 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:18.474 * Looking for test storage... 
00:40:18.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:18.474 13:51:52 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:18.474 13:51:52 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:18.474 13:51:52 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:18.475 13:51:52 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:18.475 13:51:52 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:18.475 13:51:52 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:18.475 13:51:52 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.475 13:51:52 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.475 13:51:52 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.475 13:51:52 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:18.475 13:51:52 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@47 -- # : 0 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:18.475 13:51:52 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:18.475 13:51:52 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:18.475 13:51:52 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:18.475 13:51:52 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:18.475 13:51:52 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:18.475 13:51:52 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.grUdps30PX 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:18.475 13:51:52 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.grUdps30PX 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.grUdps30PX 00:40:18.475 13:51:52 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.grUdps30PX 00:40:18.475 13:51:52 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FJQnFgbfMd 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:18.475 13:51:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FJQnFgbfMd 00:40:18.475 13:51:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FJQnFgbfMd 00:40:18.475 13:51:52 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.FJQnFgbfMd 00:40:18.475 13:51:52 keyring_file -- keyring/file.sh@30 -- # tgtpid=488478 00:40:18.475 13:51:52 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:18.475 13:51:52 keyring_file -- keyring/file.sh@32 -- # waitforlisten 488478 00:40:18.475 13:51:52 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 488478 ']' 00:40:18.475 13:51:52 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:18.475 13:51:52 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:18.475 13:51:52 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:18.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:18.475 13:51:52 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:18.475 13:51:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:18.475 [2024-07-13 13:51:52.887385] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
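The key-preparation steps traced above reduce to a short sequence: create a temp file, write an NVMe TLS PSK in interchange format into it, restrict it to mode 0600, and register the path with the keyring over the bperf RPC socket. A minimal sketch of that sequence follows, with a placeholder key string and hypothetical paths; in the real test the key value is derived by format_interchange_psk in nvmf/common.sh rather than written out literally.

KEY0_PATH=$(mktemp)                                   # e.g. /tmp/tmp.XXXXXXXXXX (hypothetical)
echo "NVMeTLSkey-1:01:placeholder-psk-value:" > "$KEY0_PATH"   # placeholder, not a derived PSK
chmod 0600 "$KEY0_PATH"                               # keyring_file rejects group/other-accessible key files
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock keyring_file_add_key key0 "$KEY0_PATH"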
00:40:18.475 [2024-07-13 13:51:52.887541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488478 ] 00:40:18.475 EAL: No free 2048 kB hugepages reported on node 1 00:40:18.475 [2024-07-13 13:51:53.016636] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:18.733 [2024-07-13 13:51:53.262991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:19.668 13:51:54 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:19.668 [2024-07-13 13:51:54.145703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:19.668 null0 00:40:19.668 [2024-07-13 13:51:54.177706] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:19.668 [2024-07-13 13:51:54.178301] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:19.668 [2024-07-13 13:51:54.185748] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:19.668 13:51:54 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:19.668 [2024-07-13 13:51:54.197765] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:19.668 request: 00:40:19.668 { 00:40:19.668 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:19.668 "secure_channel": false, 00:40:19.668 "listen_address": { 00:40:19.668 "trtype": "tcp", 00:40:19.668 "traddr": "127.0.0.1", 00:40:19.668 "trsvcid": "4420" 00:40:19.668 }, 00:40:19.668 "method": "nvmf_subsystem_add_listener", 00:40:19.668 "req_id": 1 00:40:19.668 } 00:40:19.668 Got JSON-RPC error response 00:40:19.668 response: 00:40:19.668 { 00:40:19.668 "code": -32602, 00:40:19.668 "message": "Invalid parameters" 00:40:19.668 } 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 
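The failing call above is the point of this step: the target already opened the 127.0.0.1:4420 TCP listener a moment earlier (the "Listening on 127.0.0.1 port 4420" notice), so re-adding the same listener is expected to return the "Listener already exists" JSON-RPC error, and the NOT wrapper with es=1 simply asserts that the command failed. Issuing the same request by hand would look roughly like the sketch below, assuming the standard rpc.py options and the default target RPC socket.

# Expected to fail with "Listener already exists" once the listener is already up.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0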
00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:19.668 13:51:54 keyring_file -- keyring/file.sh@46 -- # bperfpid=488625 00:40:19.668 13:51:54 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:19.668 13:51:54 keyring_file -- keyring/file.sh@48 -- # waitforlisten 488625 /var/tmp/bperf.sock 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 488625 ']' 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:19.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:19.668 13:51:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:19.668 [2024-07-13 13:51:54.281459] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:19.668 [2024-07-13 13:51:54.281606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488625 ] 00:40:19.668 EAL: No free 2048 kB hugepages reported on node 1 00:40:19.668 [2024-07-13 13:51:54.400321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.927 [2024-07-13 13:51:54.621259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:20.494 13:51:55 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:20.494 13:51:55 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:20.494 13:51:55 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.grUdps30PX 00:40:20.494 13:51:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.grUdps30PX 00:40:20.752 13:51:55 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FJQnFgbfMd 00:40:20.752 13:51:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FJQnFgbfMd 00:40:21.010 13:51:55 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:40:21.010 13:51:55 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:40:21.010 13:51:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.010 13:51:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.010 13:51:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:21.268 13:51:55 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.grUdps30PX == \/\t\m\p\/\t\m\p\.\g\r\U\d\p\s\3\0\P\X ]] 00:40:21.268 13:51:55 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:40:21.268 13:51:55 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:21.268 13:51:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.268 13:51:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.268 13:51:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:21.526 13:51:56 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FJQnFgbfMd == \/\t\m\p\/\t\m\p\.\F\J\Q\n\F\g\b\f\M\d ]] 00:40:21.526 13:51:56 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:40:21.526 13:51:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:21.526 13:51:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:21.526 13:51:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.526 13:51:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:21.526 13:51:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.783 13:51:56 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:40:21.783 13:51:56 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:40:21.783 13:51:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:21.783 13:51:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:21.783 13:51:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:21.783 13:51:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.783 13:51:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:22.041 13:51:56 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:22.041 13:51:56 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:22.041 13:51:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:22.298 [2024-07-13 13:51:56.907202] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:22.298 nvme0n1 00:40:22.298 13:51:57 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:40:22.298 13:51:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:22.298 13:51:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:22.298 13:51:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:22.298 13:51:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:22.298 13:51:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.556 13:51:57 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:40:22.556 13:51:57 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:40:22.556 13:51:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:22.556 13:51:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:22.556 13:51:57 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:40:22.556 13:51:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:22.556 13:51:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:22.814 13:51:57 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:40:22.814 13:51:57 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:23.072 Running I/O for 1 seconds... 00:40:24.008 00:40:24.008 Latency(us) 00:40:24.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:24.008 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:24.008 nvme0n1 : 1.03 3610.75 14.10 0.00 0.00 34901.24 5121.52 38447.79 00:40:24.008 =================================================================================================================== 00:40:24.008 Total : 3610.75 14.10 0.00 0.00 34901.24 5121.52 38447.79 00:40:24.008 0 00:40:24.008 13:51:58 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:24.008 13:51:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:24.267 13:51:58 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:40:24.267 13:51:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:24.267 13:51:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:24.267 13:51:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:24.267 13:51:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:24.267 13:51:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:24.525 13:51:59 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:40:24.525 13:51:59 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:40:24.525 13:51:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:24.525 13:51:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:24.525 13:51:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:24.525 13:51:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:24.525 13:51:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:24.783 13:51:59 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:24.783 13:51:59 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:24.783 13:51:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:24.783 13:51:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:24.783 13:51:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:24.783 13:51:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:24.783 13:51:59 keyring_file -- common/autotest_common.sh@640 -- # type -t 
bperf_cmd 00:40:24.783 13:51:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:24.783 13:51:59 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:24.783 13:51:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:25.042 [2024-07-13 13:51:59.663781] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:25.042 [2024-07-13 13:51:59.663893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (107): Transport endpoint is not connected 00:40:25.042 [2024-07-13 13:51:59.664862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:40:25.042 [2024-07-13 13:51:59.665845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:25.042 [2024-07-13 13:51:59.665898] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:25.042 [2024-07-13 13:51:59.665918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:25.042 request: 00:40:25.042 { 00:40:25.042 "name": "nvme0", 00:40:25.042 "trtype": "tcp", 00:40:25.042 "traddr": "127.0.0.1", 00:40:25.042 "adrfam": "ipv4", 00:40:25.042 "trsvcid": "4420", 00:40:25.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:25.042 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:25.042 "prchk_reftag": false, 00:40:25.042 "prchk_guard": false, 00:40:25.042 "hdgst": false, 00:40:25.042 "ddgst": false, 00:40:25.042 "psk": "key1", 00:40:25.042 "method": "bdev_nvme_attach_controller", 00:40:25.042 "req_id": 1 00:40:25.042 } 00:40:25.042 Got JSON-RPC error response 00:40:25.042 response: 00:40:25.042 { 00:40:25.042 "code": -5, 00:40:25.042 "message": "Input/output error" 00:40:25.042 } 00:40:25.042 13:51:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:25.042 13:51:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:25.042 13:51:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:25.042 13:51:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:25.042 13:51:59 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:40:25.042 13:51:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:25.042 13:51:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:25.042 13:51:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.042 13:51:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:25.042 13:51:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.301 13:51:59 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:40:25.301 13:51:59 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:40:25.301 13:51:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:25.301 13:51:59 keyring_file -- keyring/common.sh@12 -- 
# jq -r .refcnt 00:40:25.301 13:51:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.301 13:51:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.301 13:51:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:25.559 13:52:00 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:25.559 13:52:00 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:40:25.559 13:52:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:25.818 13:52:00 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:40:25.818 13:52:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:26.076 13:52:00 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:40:26.076 13:52:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:26.076 13:52:00 keyring_file -- keyring/file.sh@77 -- # jq length 00:40:26.334 13:52:00 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:40:26.334 13:52:00 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.grUdps30PX 00:40:26.334 13:52:00 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.grUdps30PX 00:40:26.334 13:52:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:26.334 13:52:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.grUdps30PX 00:40:26.334 13:52:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:26.334 13:52:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:26.334 13:52:00 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:26.334 13:52:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:26.334 13:52:00 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.grUdps30PX 00:40:26.334 13:52:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.grUdps30PX 00:40:26.592 [2024-07-13 13:52:01.159522] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.grUdps30PX': 0100660 00:40:26.592 [2024-07-13 13:52:01.159578] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:26.592 request: 00:40:26.592 { 00:40:26.592 "name": "key0", 00:40:26.592 "path": "/tmp/tmp.grUdps30PX", 00:40:26.592 "method": "keyring_file_add_key", 00:40:26.592 "req_id": 1 00:40:26.592 } 00:40:26.592 Got JSON-RPC error response 00:40:26.592 response: 00:40:26.592 { 00:40:26.592 "code": -1, 00:40:26.592 "message": "Operation not permitted" 00:40:26.592 } 00:40:26.592 13:52:01 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:26.592 13:52:01 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:26.592 13:52:01 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:26.592 13:52:01 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 
)) 00:40:26.592 13:52:01 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.grUdps30PX 00:40:26.592 13:52:01 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.grUdps30PX 00:40:26.592 13:52:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.grUdps30PX 00:40:26.850 13:52:01 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.grUdps30PX 00:40:26.850 13:52:01 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:40:26.850 13:52:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:26.850 13:52:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:26.850 13:52:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:26.850 13:52:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:26.850 13:52:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:27.108 13:52:01 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:40:27.108 13:52:01 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:27.108 13:52:01 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:27.108 13:52:01 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:27.108 13:52:01 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:27.108 13:52:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:27.108 13:52:01 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:27.108 13:52:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:27.108 13:52:01 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:27.108 13:52:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:27.365 [2024-07-13 13:52:01.917752] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.grUdps30PX': No such file or directory 00:40:27.365 [2024-07-13 13:52:01.917819] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:27.365 [2024-07-13 13:52:01.917856] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:27.365 [2024-07-13 13:52:01.917897] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:27.365 [2024-07-13 13:52:01.917918] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:27.365 request: 00:40:27.365 { 00:40:27.365 "name": "nvme0", 00:40:27.365 "trtype": "tcp", 00:40:27.365 "traddr": "127.0.0.1", 00:40:27.365 "adrfam": "ipv4", 00:40:27.365 "trsvcid": "4420", 00:40:27.365 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:40:27.365 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:27.365 "prchk_reftag": false, 00:40:27.365 "prchk_guard": false, 00:40:27.365 "hdgst": false, 00:40:27.365 "ddgst": false, 00:40:27.365 "psk": "key0", 00:40:27.365 "method": "bdev_nvme_attach_controller", 00:40:27.365 "req_id": 1 00:40:27.365 } 00:40:27.365 Got JSON-RPC error response 00:40:27.365 response: 00:40:27.365 { 00:40:27.365 "code": -19, 00:40:27.365 "message": "No such device" 00:40:27.365 } 00:40:27.365 13:52:01 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:27.365 13:52:01 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:27.365 13:52:01 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:27.365 13:52:01 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:27.365 13:52:01 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:40:27.365 13:52:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:27.623 13:52:02 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:27.623 13:52:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:27.623 13:52:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:27.623 13:52:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:27.623 13:52:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:27.623 13:52:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:27.623 13:52:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MwuPThkEPK 00:40:27.623 13:52:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:27.623 13:52:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:27.623 13:52:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:27.623 13:52:02 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:27.623 13:52:02 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:27.623 13:52:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:27.623 13:52:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:27.623 13:52:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MwuPThkEPK 00:40:27.623 13:52:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MwuPThkEPK 00:40:27.623 13:52:02 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.MwuPThkEPK 00:40:27.623 13:52:02 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MwuPThkEPK 00:40:27.623 13:52:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MwuPThkEPK 00:40:27.880 13:52:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:27.880 13:52:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:28.138 nvme0n1 00:40:28.138 13:52:02 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:40:28.138 13:52:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:28.138 13:52:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:28.138 13:52:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:28.138 13:52:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:28.138 13:52:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:28.395 13:52:03 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:40:28.395 13:52:03 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:40:28.395 13:52:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:28.653 13:52:03 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:40:28.653 13:52:03 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:40:28.653 13:52:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:28.653 13:52:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:28.653 13:52:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:28.911 13:52:03 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:40:28.911 13:52:03 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:40:28.911 13:52:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:28.911 13:52:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:28.911 13:52:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:28.911 13:52:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:28.911 13:52:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:29.168 13:52:03 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:40:29.168 13:52:03 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:29.168 13:52:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:29.425 13:52:04 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:40:29.425 13:52:04 keyring_file -- keyring/file.sh@104 -- # jq length 00:40:29.425 13:52:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:29.682 13:52:04 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:40:29.682 13:52:04 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MwuPThkEPK 00:40:29.682 13:52:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MwuPThkEPK 00:40:29.969 13:52:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FJQnFgbfMd 00:40:29.969 13:52:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FJQnFgbfMd 00:40:30.228 13:52:04 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:30.228 13:52:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:30.486 nvme0n1 00:40:30.486 13:52:05 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:40:30.486 13:52:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:30.744 13:52:05 keyring_file -- keyring/file.sh@112 -- # config='{ 00:40:30.744 "subsystems": [ 00:40:30.744 { 00:40:30.744 "subsystem": "keyring", 00:40:30.744 "config": [ 00:40:30.744 { 00:40:30.744 "method": "keyring_file_add_key", 00:40:30.744 "params": { 00:40:30.744 "name": "key0", 00:40:30.744 "path": "/tmp/tmp.MwuPThkEPK" 00:40:30.744 } 00:40:30.744 }, 00:40:30.744 { 00:40:30.744 "method": "keyring_file_add_key", 00:40:30.744 "params": { 00:40:30.744 "name": "key1", 00:40:30.744 "path": "/tmp/tmp.FJQnFgbfMd" 00:40:30.744 } 00:40:30.744 } 00:40:30.744 ] 00:40:30.744 }, 00:40:30.744 { 00:40:30.744 "subsystem": "iobuf", 00:40:30.744 "config": [ 00:40:30.744 { 00:40:30.744 "method": "iobuf_set_options", 00:40:30.744 "params": { 00:40:30.744 "small_pool_count": 8192, 00:40:30.744 "large_pool_count": 1024, 00:40:30.744 "small_bufsize": 8192, 00:40:30.744 "large_bufsize": 135168 00:40:30.744 } 00:40:30.744 } 00:40:30.744 ] 00:40:30.744 }, 00:40:30.744 { 00:40:30.744 "subsystem": "sock", 00:40:30.744 "config": [ 00:40:30.744 { 00:40:30.745 "method": "sock_set_default_impl", 00:40:30.745 "params": { 00:40:30.745 "impl_name": "posix" 00:40:30.745 } 00:40:30.745 }, 00:40:30.745 { 00:40:30.745 "method": "sock_impl_set_options", 00:40:30.745 "params": { 00:40:30.745 "impl_name": "ssl", 00:40:30.745 "recv_buf_size": 4096, 00:40:30.745 "send_buf_size": 4096, 00:40:30.745 "enable_recv_pipe": true, 00:40:30.745 "enable_quickack": false, 00:40:30.745 "enable_placement_id": 0, 00:40:30.745 "enable_zerocopy_send_server": true, 00:40:30.745 "enable_zerocopy_send_client": false, 00:40:30.745 "zerocopy_threshold": 0, 00:40:30.745 "tls_version": 0, 00:40:30.745 "enable_ktls": false 00:40:30.745 } 00:40:30.745 }, 00:40:30.745 { 00:40:30.745 "method": "sock_impl_set_options", 00:40:30.745 "params": { 00:40:30.745 "impl_name": "posix", 00:40:30.745 "recv_buf_size": 2097152, 00:40:30.745 "send_buf_size": 2097152, 00:40:30.745 "enable_recv_pipe": true, 00:40:30.745 "enable_quickack": false, 00:40:30.745 "enable_placement_id": 0, 00:40:30.745 "enable_zerocopy_send_server": true, 00:40:30.745 "enable_zerocopy_send_client": false, 00:40:30.745 "zerocopy_threshold": 0, 00:40:30.745 "tls_version": 0, 00:40:30.745 "enable_ktls": false 00:40:30.745 } 00:40:30.745 } 00:40:30.745 ] 00:40:30.745 }, 00:40:30.745 { 00:40:30.745 "subsystem": "vmd", 00:40:30.745 "config": [] 00:40:30.745 }, 00:40:30.745 { 00:40:30.745 "subsystem": "accel", 00:40:30.745 "config": [ 00:40:30.745 { 00:40:30.745 "method": "accel_set_options", 00:40:30.745 "params": { 00:40:30.745 "small_cache_size": 128, 00:40:30.745 "large_cache_size": 16, 00:40:30.745 "task_count": 2048, 00:40:30.745 "sequence_count": 2048, 00:40:30.745 "buf_count": 2048 00:40:30.745 } 00:40:30.745 } 00:40:30.745 ] 00:40:30.745 }, 00:40:30.745 { 00:40:30.745 
"subsystem": "bdev", 00:40:30.745 "config": [ 00:40:30.745 { 00:40:30.745 "method": "bdev_set_options", 00:40:30.745 "params": { 00:40:30.745 "bdev_io_pool_size": 65535, 00:40:30.745 "bdev_io_cache_size": 256, 00:40:30.745 "bdev_auto_examine": true, 00:40:30.745 "iobuf_small_cache_size": 128, 00:40:30.745 "iobuf_large_cache_size": 16 00:40:30.745 } 00:40:30.745 }, 00:40:30.745 { 00:40:30.745 "method": "bdev_raid_set_options", 00:40:30.745 "params": { 00:40:30.745 "process_window_size_kb": 1024 00:40:30.745 } 00:40:30.745 }, 00:40:30.745 { 00:40:30.745 "method": "bdev_iscsi_set_options", 00:40:30.745 "params": { 00:40:30.745 "timeout_sec": 30 00:40:30.745 } 00:40:30.745 }, 00:40:30.745 { 00:40:30.745 "method": "bdev_nvme_set_options", 00:40:30.745 "params": { 00:40:30.745 "action_on_timeout": "none", 00:40:30.745 "timeout_us": 0, 00:40:30.745 "timeout_admin_us": 0, 00:40:30.745 "keep_alive_timeout_ms": 10000, 00:40:30.745 "arbitration_burst": 0, 00:40:30.745 "low_priority_weight": 0, 00:40:30.745 "medium_priority_weight": 0, 00:40:30.745 "high_priority_weight": 0, 00:40:30.745 "nvme_adminq_poll_period_us": 10000, 00:40:30.745 "nvme_ioq_poll_period_us": 0, 00:40:30.745 "io_queue_requests": 512, 00:40:30.745 "delay_cmd_submit": true, 00:40:30.745 "transport_retry_count": 4, 00:40:30.745 "bdev_retry_count": 3, 00:40:30.745 "transport_ack_timeout": 0, 00:40:30.745 "ctrlr_loss_timeout_sec": 0, 00:40:30.745 "reconnect_delay_sec": 0, 00:40:30.745 "fast_io_fail_timeout_sec": 0, 00:40:30.745 "disable_auto_failback": false, 00:40:30.745 "generate_uuids": false, 00:40:30.745 "transport_tos": 0, 00:40:30.745 "nvme_error_stat": false, 00:40:30.745 "rdma_srq_size": 0, 00:40:30.745 "io_path_stat": false, 00:40:30.745 "allow_accel_sequence": false, 00:40:30.745 "rdma_max_cq_size": 0, 00:40:30.745 "rdma_cm_event_timeout_ms": 0, 00:40:30.745 "dhchap_digests": [ 00:40:30.745 "sha256", 00:40:30.745 "sha384", 00:40:30.745 "sha512" 00:40:30.745 ], 00:40:30.745 "dhchap_dhgroups": [ 00:40:30.745 "null", 00:40:30.745 "ffdhe2048", 00:40:30.745 "ffdhe3072", 00:40:30.745 "ffdhe4096", 00:40:30.745 "ffdhe6144", 00:40:30.745 "ffdhe8192" 00:40:30.745 ] 00:40:30.745 } 00:40:30.745 }, 00:40:30.745 { 00:40:30.745 "method": "bdev_nvme_attach_controller", 00:40:30.745 "params": { 00:40:30.745 "name": "nvme0", 00:40:30.745 "trtype": "TCP", 00:40:30.745 "adrfam": "IPv4", 00:40:30.745 "traddr": "127.0.0.1", 00:40:30.745 "trsvcid": "4420", 00:40:30.745 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:30.745 "prchk_reftag": false, 00:40:30.745 "prchk_guard": false, 00:40:30.745 "ctrlr_loss_timeout_sec": 0, 00:40:30.745 "reconnect_delay_sec": 0, 00:40:30.745 "fast_io_fail_timeout_sec": 0, 00:40:30.745 "psk": "key0", 00:40:30.745 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:30.745 "hdgst": false, 00:40:30.745 "ddgst": false 00:40:30.745 } 00:40:30.745 }, 00:40:30.745 { 00:40:30.745 "method": "bdev_nvme_set_hotplug", 00:40:30.745 "params": { 00:40:30.745 "period_us": 100000, 00:40:30.745 "enable": false 00:40:30.745 } 00:40:30.745 }, 00:40:30.745 { 00:40:30.745 "method": "bdev_wait_for_examine" 00:40:30.745 } 00:40:30.745 ] 00:40:30.745 }, 00:40:30.745 { 00:40:30.745 "subsystem": "nbd", 00:40:30.745 "config": [] 00:40:30.745 } 00:40:30.745 ] 00:40:30.745 }' 00:40:30.745 13:52:05 keyring_file -- keyring/file.sh@114 -- # killprocess 488625 00:40:30.745 13:52:05 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 488625 ']' 00:40:30.745 13:52:05 keyring_file -- common/autotest_common.sh@952 -- # kill -0 488625 00:40:30.745 13:52:05 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:30.745 13:52:05 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:30.745 13:52:05 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 488625 00:40:30.745 13:52:05 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:30.745 13:52:05 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:30.745 13:52:05 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 488625' 00:40:30.745 killing process with pid 488625 00:40:30.745 13:52:05 keyring_file -- common/autotest_common.sh@967 -- # kill 488625 00:40:30.745 Received shutdown signal, test time was about 1.000000 seconds 00:40:30.745 00:40:30.745 Latency(us) 00:40:30.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:30.745 =================================================================================================================== 00:40:30.745 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:30.745 13:52:05 keyring_file -- common/autotest_common.sh@972 -- # wait 488625 00:40:32.129 13:52:06 keyring_file -- keyring/file.sh@117 -- # bperfpid=490215 00:40:32.129 13:52:06 keyring_file -- keyring/file.sh@119 -- # waitforlisten 490215 /var/tmp/bperf.sock 00:40:32.129 13:52:06 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 490215 ']' 00:40:32.129 13:52:06 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:32.129 13:52:06 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:32.129 13:52:06 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:32.129 13:52:06 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:40:32.129 "subsystems": [ 00:40:32.129 { 00:40:32.129 "subsystem": "keyring", 00:40:32.129 "config": [ 00:40:32.129 { 00:40:32.129 "method": "keyring_file_add_key", 00:40:32.129 "params": { 00:40:32.129 "name": "key0", 00:40:32.129 "path": "/tmp/tmp.MwuPThkEPK" 00:40:32.129 } 00:40:32.129 }, 00:40:32.129 { 00:40:32.129 "method": "keyring_file_add_key", 00:40:32.129 "params": { 00:40:32.129 "name": "key1", 00:40:32.129 "path": "/tmp/tmp.FJQnFgbfMd" 00:40:32.129 } 00:40:32.129 } 00:40:32.129 ] 00:40:32.129 }, 00:40:32.129 { 00:40:32.129 "subsystem": "iobuf", 00:40:32.129 "config": [ 00:40:32.129 { 00:40:32.129 "method": "iobuf_set_options", 00:40:32.129 "params": { 00:40:32.129 "small_pool_count": 8192, 00:40:32.129 "large_pool_count": 1024, 00:40:32.129 "small_bufsize": 8192, 00:40:32.129 "large_bufsize": 135168 00:40:32.129 } 00:40:32.129 } 00:40:32.129 ] 00:40:32.129 }, 00:40:32.129 { 00:40:32.129 "subsystem": "sock", 00:40:32.129 "config": [ 00:40:32.129 { 00:40:32.129 "method": "sock_set_default_impl", 00:40:32.129 "params": { 00:40:32.129 "impl_name": "posix" 00:40:32.129 } 00:40:32.129 }, 00:40:32.129 { 00:40:32.129 "method": "sock_impl_set_options", 00:40:32.129 "params": { 00:40:32.129 "impl_name": "ssl", 00:40:32.129 "recv_buf_size": 4096, 00:40:32.129 "send_buf_size": 4096, 00:40:32.129 "enable_recv_pipe": true, 00:40:32.129 "enable_quickack": false, 00:40:32.129 "enable_placement_id": 0, 00:40:32.129 "enable_zerocopy_send_server": true, 00:40:32.129 "enable_zerocopy_send_client": false, 00:40:32.129 "zerocopy_threshold": 0, 00:40:32.129 "tls_version": 0, 00:40:32.129 "enable_ktls": false 
00:40:32.129 } 00:40:32.129 }, 00:40:32.129 { 00:40:32.129 "method": "sock_impl_set_options", 00:40:32.129 "params": { 00:40:32.129 "impl_name": "posix", 00:40:32.129 "recv_buf_size": 2097152, 00:40:32.129 "send_buf_size": 2097152, 00:40:32.129 "enable_recv_pipe": true, 00:40:32.129 "enable_quickack": false, 00:40:32.129 "enable_placement_id": 0, 00:40:32.129 "enable_zerocopy_send_server": true, 00:40:32.129 "enable_zerocopy_send_client": false, 00:40:32.129 "zerocopy_threshold": 0, 00:40:32.129 "tls_version": 0, 00:40:32.129 "enable_ktls": false 00:40:32.129 } 00:40:32.129 } 00:40:32.129 ] 00:40:32.129 }, 00:40:32.129 { 00:40:32.129 "subsystem": "vmd", 00:40:32.129 "config": [] 00:40:32.129 }, 00:40:32.129 { 00:40:32.129 "subsystem": "accel", 00:40:32.129 "config": [ 00:40:32.129 { 00:40:32.129 "method": "accel_set_options", 00:40:32.129 "params": { 00:40:32.129 "small_cache_size": 128, 00:40:32.129 "large_cache_size": 16, 00:40:32.129 "task_count": 2048, 00:40:32.129 "sequence_count": 2048, 00:40:32.129 "buf_count": 2048 00:40:32.129 } 00:40:32.129 } 00:40:32.129 ] 00:40:32.129 }, 00:40:32.129 { 00:40:32.129 "subsystem": "bdev", 00:40:32.129 "config": [ 00:40:32.129 { 00:40:32.129 "method": "bdev_set_options", 00:40:32.129 "params": { 00:40:32.129 "bdev_io_pool_size": 65535, 00:40:32.129 "bdev_io_cache_size": 256, 00:40:32.129 "bdev_auto_examine": true, 00:40:32.129 "iobuf_small_cache_size": 128, 00:40:32.129 "iobuf_large_cache_size": 16 00:40:32.129 } 00:40:32.129 }, 00:40:32.129 { 00:40:32.129 "method": "bdev_raid_set_options", 00:40:32.129 "params": { 00:40:32.129 "process_window_size_kb": 1024 00:40:32.129 } 00:40:32.129 }, 00:40:32.129 { 00:40:32.129 "method": "bdev_iscsi_set_options", 00:40:32.129 "params": { 00:40:32.129 "timeout_sec": 30 00:40:32.129 } 00:40:32.129 }, 00:40:32.129 { 00:40:32.129 "method": "bdev_nvme_set_options", 00:40:32.129 "params": { 00:40:32.129 "action_on_timeout": "none", 00:40:32.129 "timeout_us": 0, 00:40:32.129 "timeout_admin_us": 0, 00:40:32.129 "keep_alive_timeout_ms": 10000, 00:40:32.129 "arbitration_burst": 0, 00:40:32.129 "low_priority_weight": 0, 00:40:32.129 "medium_priority_weight": 0, 00:40:32.129 "high_priority_weight": 0, 00:40:32.129 "nvme_adminq_poll_period_us": 10000, 00:40:32.129 "nvme_ioq_poll_period_us": 0, 00:40:32.129 "io_queue_requests": 512, 00:40:32.129 "delay_cmd_submit": true, 00:40:32.130 "transport_retry_count": 4, 00:40:32.130 "bdev_retry_count": 3, 00:40:32.130 "transport_ack_timeout": 0, 00:40:32.130 "ctrlr_loss_timeout_sec": 0, 00:40:32.130 "reconnect_delay_sec": 0, 00:40:32.130 "fast_io_fail_timeout_sec": 0, 00:40:32.130 "disable_auto_failback": false, 00:40:32.130 "generate_uuids": false, 00:40:32.130 "transport_tos": 0, 00:40:32.130 "nvme_error_stat": false, 00:40:32.130 "rdma_srq_size": 0, 00:40:32.130 "io_path_stat": false, 00:40:32.130 "allow_accel_sequence": false, 00:40:32.130 "rdma_max_cq_size": 0, 00:40:32.130 "rdma_cm_event_timeout_ms": 0, 00:40:32.130 "dhchap_digests": [ 00:40:32.130 "sha256", 00:40:32.130 "sha384", 00:40:32.130 "sha512" 00:40:32.130 ], 00:40:32.130 "dhchap_dhgroups": [ 00:40:32.130 "null", 00:40:32.130 "ffdhe2048", 00:40:32.130 "ffdhe3072", 00:40:32.130 "ffdhe4096", 00:40:32.130 "ffdhe6144", 00:40:32.130 "ffdhe8192" 00:40:32.130 ] 00:40:32.130 } 00:40:32.130 }, 00:40:32.130 { 00:40:32.130 "method": "bdev_nvme_attach_controller", 00:40:32.130 "params": { 00:40:32.130 "name": "nvme0", 00:40:32.130 "trtype": "TCP", 00:40:32.130 "adrfam": "IPv4", 00:40:32.130 "traddr": "127.0.0.1", 00:40:32.130 
"trsvcid": "4420", 00:40:32.130 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:32.130 "prchk_reftag": false, 00:40:32.130 "prchk_guard": false, 00:40:32.130 "ctrlr_loss_timeout_sec": 0, 00:40:32.130 "reconnect_delay_sec": 0, 00:40:32.130 "fast_io_fail_timeout_sec": 0, 00:40:32.130 "psk": "key0", 00:40:32.130 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:32.130 "hdgst": false, 00:40:32.130 "ddgst": false 00:40:32.130 } 00:40:32.130 }, 00:40:32.130 { 00:40:32.130 "method": "bdev_nvme_set_hotplug", 00:40:32.130 "params": { 00:40:32.130 "period_us": 100000, 00:40:32.130 "enable": false 00:40:32.130 } 00:40:32.130 }, 00:40:32.130 { 00:40:32.130 "method": "bdev_wait_for_examine" 00:40:32.130 } 00:40:32.130 ] 00:40:32.130 }, 00:40:32.130 { 00:40:32.130 "subsystem": "nbd", 00:40:32.130 "config": [] 00:40:32.130 } 00:40:32.130 ] 00:40:32.130 }' 00:40:32.130 13:52:06 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:32.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:32.130 13:52:06 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:32.130 13:52:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:32.130 [2024-07-13 13:52:06.581464] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:32.130 [2024-07-13 13:52:06.581615] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490215 ] 00:40:32.130 EAL: No free 2048 kB hugepages reported on node 1 00:40:32.130 [2024-07-13 13:52:06.707434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:32.389 [2024-07-13 13:52:06.935162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:32.647 [2024-07-13 13:52:07.341732] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:32.905 13:52:07 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:32.905 13:52:07 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:32.905 13:52:07 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:40:32.905 13:52:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:32.905 13:52:07 keyring_file -- keyring/file.sh@120 -- # jq length 00:40:33.163 13:52:07 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:40:33.163 13:52:07 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:40:33.163 13:52:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:33.163 13:52:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:33.163 13:52:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:33.163 13:52:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:33.164 13:52:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:33.422 13:52:08 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:33.422 13:52:08 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:40:33.422 13:52:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:33.422 13:52:08 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:40:33.422 13:52:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:33.422 13:52:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:33.422 13:52:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:33.680 13:52:08 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:40:33.680 13:52:08 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:40:33.680 13:52:08 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:40:33.680 13:52:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:33.938 13:52:08 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:40:33.938 13:52:08 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:33.938 13:52:08 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.MwuPThkEPK /tmp/tmp.FJQnFgbfMd 00:40:33.938 13:52:08 keyring_file -- keyring/file.sh@20 -- # killprocess 490215 00:40:33.938 13:52:08 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 490215 ']' 00:40:33.938 13:52:08 keyring_file -- common/autotest_common.sh@952 -- # kill -0 490215 00:40:33.938 13:52:08 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:33.938 13:52:08 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:33.938 13:52:08 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 490215 00:40:33.938 13:52:08 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:33.938 13:52:08 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:33.938 13:52:08 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 490215' 00:40:33.938 killing process with pid 490215 00:40:33.938 13:52:08 keyring_file -- common/autotest_common.sh@967 -- # kill 490215 00:40:33.938 Received shutdown signal, test time was about 1.000000 seconds 00:40:33.938 00:40:33.938 Latency(us) 00:40:33.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:33.938 =================================================================================================================== 00:40:33.938 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:33.938 13:52:08 keyring_file -- common/autotest_common.sh@972 -- # wait 490215 00:40:34.872 13:52:09 keyring_file -- keyring/file.sh@21 -- # killprocess 488478 00:40:34.872 13:52:09 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 488478 ']' 00:40:34.872 13:52:09 keyring_file -- common/autotest_common.sh@952 -- # kill -0 488478 00:40:34.872 13:52:09 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:34.872 13:52:09 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:34.872 13:52:09 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 488478 00:40:34.872 13:52:09 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:34.872 13:52:09 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:34.872 13:52:09 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 488478' 00:40:34.872 killing process with pid 488478 00:40:34.872 13:52:09 keyring_file -- common/autotest_common.sh@967 -- # kill 488478 00:40:34.872 [2024-07-13 
13:52:09.614025] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:40:34.872 13:52:09 keyring_file -- common/autotest_common.sh@972 -- # wait 488478 00:40:37.404 00:40:37.404 real 0m19.363s 00:40:37.404 user 0m42.513s 00:40:37.404 sys 0m3.767s 00:40:37.404 13:52:12 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:37.404 13:52:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:37.404 ************************************ 00:40:37.404 END TEST keyring_file 00:40:37.404 ************************************ 00:40:37.404 13:52:12 -- common/autotest_common.sh@1142 -- # return 0 00:40:37.404 13:52:12 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:40:37.404 13:52:12 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:37.404 13:52:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:37.404 13:52:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:37.404 13:52:12 -- common/autotest_common.sh@10 -- # set +x 00:40:37.404 ************************************ 00:40:37.404 START TEST keyring_linux 00:40:37.404 ************************************ 00:40:37.404 13:52:12 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:37.404 * Looking for test storage... 00:40:37.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:37.404 13:52:12 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:37.404 13:52:12 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:37.404 13:52:12 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:37.404 
13:52:12 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:37.404 13:52:12 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:37.404 13:52:12 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:37.404 13:52:12 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.404 13:52:12 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.405 13:52:12 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.405 13:52:12 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:37.405 13:52:12 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:37.405 13:52:12 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:37.405 13:52:12 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:37.405 13:52:12 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:37.405 13:52:12 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:37.405 13:52:12 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:37.405 13:52:12 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:37.405 13:52:12 
keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:37.405 13:52:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:37.405 13:52:12 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:37.405 13:52:12 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:37.405 13:52:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:37.405 13:52:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:37.405 13:52:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:40:37.405 13:52:12 keyring_linux -- nvmf/common.sh@705 -- # python - 00:40:37.663 13:52:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:37.663 13:52:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:37.663 /tmp/:spdk-test:key0 00:40:37.663 13:52:12 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:37.663 13:52:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:37.663 13:52:12 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:37.663 13:52:12 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:37.663 13:52:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:37.663 13:52:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:37.663 13:52:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:37.663 13:52:12 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:37.663 13:52:12 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:40:37.663 13:52:12 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:37.663 13:52:12 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:40:37.663 13:52:12 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:40:37.663 13:52:12 keyring_linux -- nvmf/common.sh@705 -- # python - 00:40:37.663 13:52:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:37.663 13:52:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:37.663 /tmp/:spdk-test:key1 00:40:37.663 13:52:12 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=490974 00:40:37.663 13:52:12 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:37.663 13:52:12 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 490974 00:40:37.663 13:52:12 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 490974 ']' 00:40:37.664 13:52:12 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:37.664 13:52:12 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:37.664 13:52:12 keyring_linux -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:37.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:37.664 13:52:12 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:37.664 13:52:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:37.664 [2024-07-13 13:52:12.293211] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:37.664 [2024-07-13 13:52:12.293371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490974 ] 00:40:37.664 EAL: No free 2048 kB hugepages reported on node 1 00:40:37.922 [2024-07-13 13:52:12.422495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:37.922 [2024-07-13 13:52:12.644585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:38.853 13:52:13 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:38.853 13:52:13 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:38.853 13:52:13 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:38.853 13:52:13 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.853 13:52:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:38.853 [2024-07-13 13:52:13.508947] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:38.853 null0 00:40:38.853 [2024-07-13 13:52:13.540971] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:38.854 [2024-07-13 13:52:13.541558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:38.854 13:52:13 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.854 13:52:13 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:38.854 354678513 00:40:38.854 13:52:13 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:38.854 608375187 00:40:38.854 13:52:13 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=491116 00:40:38.854 13:52:13 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:38.854 13:52:13 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 491116 /var/tmp/bperf.sock 00:40:38.854 13:52:13 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 491116 ']' 00:40:38.854 13:52:13 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:38.854 13:52:13 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:38.854 13:52:13 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:38.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
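The keyring_linux setup above boils down to ordinary keyctl operations against the session keyring (@s). A minimal sketch of the same steps, reusing the key name, payload and serial shown in this run; this is an illustration, not output of the test itself:

  keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s   # prints the new key serial (354678513 here)
  sn=$(keyctl search @s user :spdk-test:key0)   # look the key up by name, as get_keysn does
  keyctl print "$sn"                            # dump the interchange-format PSK payload for comparison
  keyctl unlink "$sn" @s                        # remove it again, as unlink_key does during cleanup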
00:40:38.854 13:52:13 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:38.854 13:52:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:39.111 [2024-07-13 13:52:13.639217] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:39.111 [2024-07-13 13:52:13.639366] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491116 ] 00:40:39.111 EAL: No free 2048 kB hugepages reported on node 1 00:40:39.111 [2024-07-13 13:52:13.768516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.368 [2024-07-13 13:52:14.023052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:39.932 13:52:14 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:39.932 13:52:14 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:39.932 13:52:14 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:39.932 13:52:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:40.188 13:52:14 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:40.188 13:52:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:40.753 13:52:15 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:40.753 13:52:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:41.011 [2024-07-13 13:52:15.614709] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:41.011 nvme0n1 00:40:41.011 13:52:15 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:41.011 13:52:15 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:41.011 13:52:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:41.011 13:52:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:41.011 13:52:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:41.011 13:52:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:41.269 13:52:15 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:41.269 13:52:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:41.269 13:52:15 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:41.269 13:52:15 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:41.269 13:52:15 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:41.269 13:52:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:41.269 13:52:15 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == ":spdk-test:key0")' 00:40:41.526 13:52:16 keyring_linux -- keyring/linux.sh@25 -- # sn=354678513 00:40:41.526 13:52:16 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:41.526 13:52:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:41.526 13:52:16 keyring_linux -- keyring/linux.sh@26 -- # [[ 354678513 == \3\5\4\6\7\8\5\1\3 ]] 00:40:41.526 13:52:16 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 354678513 00:40:41.526 13:52:16 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:41.526 13:52:16 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:41.784 Running I/O for 1 seconds... 00:40:42.718 00:40:42.718 Latency(us) 00:40:42.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:42.718 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:42.718 nvme0n1 : 1.03 3533.77 13.80 0.00 0.00 35791.57 12718.84 49321.91 00:40:42.718 =================================================================================================================== 00:40:42.718 Total : 3533.77 13.80 0.00 0.00 35791.57 12718.84 49321.91 00:40:42.718 0 00:40:42.718 13:52:17 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:42.718 13:52:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:42.976 13:52:17 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:42.976 13:52:17 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:42.976 13:52:17 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:42.976 13:52:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:42.976 13:52:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:42.976 13:52:17 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:43.233 13:52:17 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:43.233 13:52:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:43.233 13:52:17 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:43.233 13:52:17 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:43.233 13:52:17 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:40:43.233 13:52:17 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:43.233 13:52:17 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:43.233 13:52:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:43.233 13:52:17 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:43.233 13:52:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
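The bperf_cmd wrapper used throughout is just rpc.py pointed at bdevperf's UNIX socket, so the key checks above can be repeated by hand. A sketch using the socket path, RPCs and jq filters from this run (the rpc shell function is only a local shorthand introduced here):

  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'   # serial as seen by bdevperf
  keyctl search @s user :spdk-test:key0                                           # same serial from the kernel side (354678513 here)
  rpc keyring_get_keys | jq length                                                # key count, the value check_keys compares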
00:40:43.233 13:52:17 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:43.233 13:52:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:43.491 [2024-07-13 13:52:18.109499] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:43.491 [2024-07-13 13:52:18.110349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (107): Transport endpoint is not connected 00:40:43.491 [2024-07-13 13:52:18.111317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (9): Bad file descriptor 00:40:43.491 [2024-07-13 13:52:18.112309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:43.491 [2024-07-13 13:52:18.112349] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:43.491 [2024-07-13 13:52:18.112378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:43.491 request: 00:40:43.491 { 00:40:43.491 "name": "nvme0", 00:40:43.491 "trtype": "tcp", 00:40:43.491 "traddr": "127.0.0.1", 00:40:43.491 "adrfam": "ipv4", 00:40:43.491 "trsvcid": "4420", 00:40:43.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:43.491 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:43.491 "prchk_reftag": false, 00:40:43.491 "prchk_guard": false, 00:40:43.491 "hdgst": false, 00:40:43.491 "ddgst": false, 00:40:43.491 "psk": ":spdk-test:key1", 00:40:43.491 "method": "bdev_nvme_attach_controller", 00:40:43.491 "req_id": 1 00:40:43.491 } 00:40:43.491 Got JSON-RPC error response 00:40:43.491 response: 00:40:43.491 { 00:40:43.491 "code": -5, 00:40:43.491 "message": "Input/output error" 00:40:43.491 } 00:40:43.491 13:52:18 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:40:43.491 13:52:18 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:43.491 13:52:18 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:43.491 13:52:18 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@33 -- # sn=354678513 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 354678513 00:40:43.491 1 links removed 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 
00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@33 -- # sn=608375187 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 608375187 00:40:43.491 1 links removed 00:40:43.491 13:52:18 keyring_linux -- keyring/linux.sh@41 -- # killprocess 491116 00:40:43.491 13:52:18 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 491116 ']' 00:40:43.491 13:52:18 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 491116 00:40:43.491 13:52:18 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:43.491 13:52:18 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:43.491 13:52:18 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 491116 00:40:43.491 13:52:18 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:43.491 13:52:18 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:43.491 13:52:18 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 491116' 00:40:43.491 killing process with pid 491116 00:40:43.491 13:52:18 keyring_linux -- common/autotest_common.sh@967 -- # kill 491116 00:40:43.491 Received shutdown signal, test time was about 1.000000 seconds 00:40:43.491 00:40:43.491 Latency(us) 00:40:43.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:43.491 =================================================================================================================== 00:40:43.492 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:43.492 13:52:18 keyring_linux -- common/autotest_common.sh@972 -- # wait 491116 00:40:44.899 13:52:19 keyring_linux -- keyring/linux.sh@42 -- # killprocess 490974 00:40:44.899 13:52:19 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 490974 ']' 00:40:44.899 13:52:19 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 490974 00:40:44.899 13:52:19 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:44.899 13:52:19 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:44.899 13:52:19 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 490974 00:40:44.899 13:52:19 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:44.899 13:52:19 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:44.899 13:52:19 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 490974' 00:40:44.899 killing process with pid 490974 00:40:44.899 13:52:19 keyring_linux -- common/autotest_common.sh@967 -- # kill 490974 00:40:44.899 13:52:19 keyring_linux -- common/autotest_common.sh@972 -- # wait 490974 00:40:47.429 00:40:47.429 real 0m9.599s 00:40:47.429 user 0m15.909s 00:40:47.429 sys 0m1.844s 00:40:47.429 13:52:21 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:47.429 13:52:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:47.429 ************************************ 00:40:47.429 END TEST keyring_linux 00:40:47.429 ************************************ 00:40:47.429 13:52:21 -- common/autotest_common.sh@1142 -- # return 0 00:40:47.429 13:52:21 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:40:47.429 13:52:21 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:40:47.429 13:52:21 -- 
spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:40:47.429 13:52:21 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:40:47.429 13:52:21 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:40:47.429 13:52:21 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:40:47.429 13:52:21 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:40:47.429 13:52:21 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:40:47.429 13:52:21 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:40:47.429 13:52:21 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:40:47.429 13:52:21 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:40:47.429 13:52:21 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:40:47.429 13:52:21 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:40:47.429 13:52:21 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:40:47.429 13:52:21 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:40:47.429 13:52:21 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:40:47.430 13:52:21 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:40:47.430 13:52:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:47.430 13:52:21 -- common/autotest_common.sh@10 -- # set +x 00:40:47.430 13:52:21 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:40:47.430 13:52:21 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:40:47.430 13:52:21 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:40:47.430 13:52:21 -- common/autotest_common.sh@10 -- # set +x 00:40:48.828 INFO: APP EXITING 00:40:48.828 INFO: killing all VMs 00:40:48.828 INFO: killing vhost app 00:40:48.828 INFO: EXIT DONE 00:40:50.201 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:40:50.201 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:40:50.201 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:40:50.201 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:40:50.201 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:40:50.201 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:40:50.201 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:40:50.201 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:40:50.201 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:40:50.201 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:40:50.201 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:40:50.201 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:40:50.201 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:40:50.201 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:40:50.201 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:40:50.201 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:40:50.201 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:40:51.577 Cleaning 00:40:51.577 Removing: /var/run/dpdk/spdk0/config 00:40:51.577 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:51.577 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:51.577 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:51.577 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:51.577 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:51.577 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:51.577 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:51.577 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:51.577 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:51.577 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:51.577 Removing: 
/var/run/dpdk/spdk1/config 00:40:51.577 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:51.577 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:51.577 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:51.577 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:51.577 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:51.577 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:51.577 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:51.577 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:51.577 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:51.577 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:51.577 Removing: /var/run/dpdk/spdk1/mp_socket 00:40:51.577 Removing: /var/run/dpdk/spdk2/config 00:40:51.577 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:51.577 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:51.577 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:51.577 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:51.577 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:51.577 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:51.577 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:51.577 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:51.577 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:51.577 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:51.577 Removing: /var/run/dpdk/spdk3/config 00:40:51.577 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:51.577 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:51.577 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:51.577 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:51.577 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:51.577 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:51.577 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:51.577 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:51.577 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:51.577 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:51.577 Removing: /var/run/dpdk/spdk4/config 00:40:51.577 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:51.577 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:51.577 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:51.577 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:51.577 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:51.577 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:51.577 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:51.577 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:51.577 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:51.577 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:51.577 Removing: /dev/shm/bdev_svc_trace.1 00:40:51.577 Removing: /dev/shm/nvmf_trace.0 00:40:51.577 Removing: /dev/shm/spdk_tgt_trace.pid141453 00:40:51.577 Removing: /var/run/dpdk/spdk0 00:40:51.577 Removing: /var/run/dpdk/spdk1 00:40:51.577 Removing: /var/run/dpdk/spdk2 00:40:51.577 Removing: /var/run/dpdk/spdk3 00:40:51.577 Removing: /var/run/dpdk/spdk4 00:40:51.577 Removing: /var/run/dpdk/spdk_pid138585 00:40:51.577 Removing: /var/run/dpdk/spdk_pid139698 00:40:51.577 Removing: /var/run/dpdk/spdk_pid141453 00:40:51.577 Removing: /var/run/dpdk/spdk_pid142171 00:40:51.577 Removing: /var/run/dpdk/spdk_pid143124 00:40:51.577 Removing: /var/run/dpdk/spdk_pid143549 
00:40:51.577 Removing: /var/run/dpdk/spdk_pid144533 00:40:51.577 Removing: /var/run/dpdk/spdk_pid144678 00:40:51.577 Removing: /var/run/dpdk/spdk_pid145304 00:40:51.577 Removing: /var/run/dpdk/spdk_pid146647 00:40:51.577 Removing: /var/run/dpdk/spdk_pid147819 00:40:51.577 Removing: /var/run/dpdk/spdk_pid148527 00:40:51.577 Removing: /var/run/dpdk/spdk_pid148992 00:40:51.577 Removing: /var/run/dpdk/spdk_pid149597 00:40:51.577 Removing: /var/run/dpdk/spdk_pid150178 00:40:51.577 Removing: /var/run/dpdk/spdk_pid150466 00:40:51.577 Removing: /var/run/dpdk/spdk_pid150633 00:40:51.577 Removing: /var/run/dpdk/spdk_pid150938 00:40:51.577 Removing: /var/run/dpdk/spdk_pid151390 00:40:51.577 Removing: /var/run/dpdk/spdk_pid154627 00:40:51.577 Removing: /var/run/dpdk/spdk_pid155186 00:40:51.577 Removing: /var/run/dpdk/spdk_pid155752 00:40:51.577 Removing: /var/run/dpdk/spdk_pid155888 00:40:51.577 Removing: /var/run/dpdk/spdk_pid157248 00:40:51.577 Removing: /var/run/dpdk/spdk_pid157404 00:40:51.577 Removing: /var/run/dpdk/spdk_pid158763 00:40:51.577 Removing: /var/run/dpdk/spdk_pid158905 00:40:51.577 Removing: /var/run/dpdk/spdk_pid159339 00:40:51.577 Removing: /var/run/dpdk/spdk_pid159477 00:40:51.577 Removing: /var/run/dpdk/spdk_pid159908 00:40:51.577 Removing: /var/run/dpdk/spdk_pid160054 00:40:51.577 Removing: /var/run/dpdk/spdk_pid161084 00:40:51.577 Removing: /var/run/dpdk/spdk_pid161365 00:40:51.577 Removing: /var/run/dpdk/spdk_pid161685 00:40:51.577 Removing: /var/run/dpdk/spdk_pid162170 00:40:51.577 Removing: /var/run/dpdk/spdk_pid162410 00:40:51.577 Removing: /var/run/dpdk/spdk_pid162729 00:40:51.577 Removing: /var/run/dpdk/spdk_pid163024 00:40:51.577 Removing: /var/run/dpdk/spdk_pid163417 00:40:51.577 Removing: /var/run/dpdk/spdk_pid163721 00:40:51.577 Removing: /var/run/dpdk/spdk_pid164016 00:40:51.577 Removing: /var/run/dpdk/spdk_pid164429 00:40:51.577 Removing: /var/run/dpdk/spdk_pid164715 00:40:51.577 Removing: /var/run/dpdk/spdk_pid165126 00:40:51.578 Removing: /var/run/dpdk/spdk_pid165417 00:40:51.578 Removing: /var/run/dpdk/spdk_pid165799 00:40:51.578 Removing: /var/run/dpdk/spdk_pid166116 00:40:51.578 Removing: /var/run/dpdk/spdk_pid166408 00:40:51.578 Removing: /var/run/dpdk/spdk_pid166819 00:40:51.578 Removing: /var/run/dpdk/spdk_pid167104 00:40:51.578 Removing: /var/run/dpdk/spdk_pid167514 00:40:51.578 Removing: /var/run/dpdk/spdk_pid167812 00:40:51.578 Removing: /var/run/dpdk/spdk_pid168120 00:40:51.578 Removing: /var/run/dpdk/spdk_pid168510 00:40:51.578 Removing: /var/run/dpdk/spdk_pid168811 00:40:51.578 Removing: /var/run/dpdk/spdk_pid169218 00:40:51.578 Removing: /var/run/dpdk/spdk_pid169509 00:40:51.578 Removing: /var/run/dpdk/spdk_pid169839 00:40:51.578 Removing: /var/run/dpdk/spdk_pid170442 00:40:51.578 Removing: /var/run/dpdk/spdk_pid173011 00:40:51.578 Removing: /var/run/dpdk/spdk_pid229037 00:40:51.578 Removing: /var/run/dpdk/spdk_pid231801 00:40:51.578 Removing: /var/run/dpdk/spdk_pid239618 00:40:51.578 Removing: /var/run/dpdk/spdk_pid243045 00:40:51.578 Removing: /var/run/dpdk/spdk_pid245565 00:40:51.578 Removing: /var/run/dpdk/spdk_pid246062 00:40:51.578 Removing: /var/run/dpdk/spdk_pid250157 00:40:51.578 Removing: /var/run/dpdk/spdk_pid255989 00:40:51.578 Removing: /var/run/dpdk/spdk_pid256285 00:40:51.578 Removing: /var/run/dpdk/spdk_pid259184 00:40:51.578 Removing: /var/run/dpdk/spdk_pid263135 00:40:51.578 Removing: /var/run/dpdk/spdk_pid265440 00:40:51.578 Removing: /var/run/dpdk/spdk_pid272608 00:40:51.578 Removing: /var/run/dpdk/spdk_pid278697 00:40:51.578 
Removing: /var/run/dpdk/spdk_pid280139 00:40:51.578 Removing: /var/run/dpdk/spdk_pid280943 00:40:51.578 Removing: /var/run/dpdk/spdk_pid292040 00:40:51.578 Removing: /var/run/dpdk/spdk_pid294524 00:40:51.578 Removing: /var/run/dpdk/spdk_pid320624 00:40:51.578 Removing: /var/run/dpdk/spdk_pid323707 00:40:51.578 Removing: /var/run/dpdk/spdk_pid324884 00:40:51.578 Removing: /var/run/dpdk/spdk_pid326337 00:40:51.578 Removing: /var/run/dpdk/spdk_pid326611 00:40:51.578 Removing: /var/run/dpdk/spdk_pid326881 00:40:51.578 Removing: /var/run/dpdk/spdk_pid327158 00:40:51.578 Removing: /var/run/dpdk/spdk_pid327987 00:40:51.578 Removing: /var/run/dpdk/spdk_pid329543 00:40:51.578 Removing: /var/run/dpdk/spdk_pid331319 00:40:51.578 Removing: /var/run/dpdk/spdk_pid332012 00:40:51.578 Removing: /var/run/dpdk/spdk_pid333903 00:40:51.578 Removing: /var/run/dpdk/spdk_pid334717 00:40:51.578 Removing: /var/run/dpdk/spdk_pid335549 00:40:51.578 Removing: /var/run/dpdk/spdk_pid338322 00:40:51.578 Removing: /var/run/dpdk/spdk_pid341970 00:40:51.578 Removing: /var/run/dpdk/spdk_pid345498 00:40:51.578 Removing: /var/run/dpdk/spdk_pid370194 00:40:51.578 Removing: /var/run/dpdk/spdk_pid373096 00:40:51.578 Removing: /var/run/dpdk/spdk_pid377237 00:40:51.578 Removing: /var/run/dpdk/spdk_pid378715 00:40:51.578 Removing: /var/run/dpdk/spdk_pid380449 00:40:51.578 Removing: /var/run/dpdk/spdk_pid383398 00:40:51.578 Removing: /var/run/dpdk/spdk_pid386232 00:40:51.578 Removing: /var/run/dpdk/spdk_pid391373 00:40:51.578 Removing: /var/run/dpdk/spdk_pid391380 00:40:51.578 Removing: /var/run/dpdk/spdk_pid394415 00:40:51.578 Removing: /var/run/dpdk/spdk_pid394667 00:40:51.578 Removing: /var/run/dpdk/spdk_pid394809 00:40:51.578 Removing: /var/run/dpdk/spdk_pid395078 00:40:51.578 Removing: /var/run/dpdk/spdk_pid395200 00:40:51.578 Removing: /var/run/dpdk/spdk_pid396284 00:40:51.578 Removing: /var/run/dpdk/spdk_pid397578 00:40:51.578 Removing: /var/run/dpdk/spdk_pid398759 00:40:51.578 Removing: /var/run/dpdk/spdk_pid399936 00:40:51.578 Removing: /var/run/dpdk/spdk_pid401115 00:40:51.578 Removing: /var/run/dpdk/spdk_pid402430 00:40:51.578 Removing: /var/run/dpdk/spdk_pid406358 00:40:51.578 Removing: /var/run/dpdk/spdk_pid406811 00:40:51.578 Removing: /var/run/dpdk/spdk_pid408083 00:40:51.578 Removing: /var/run/dpdk/spdk_pid408939 00:40:51.578 Removing: /var/run/dpdk/spdk_pid412931 00:40:51.836 Removing: /var/run/dpdk/spdk_pid415128 00:40:51.836 Removing: /var/run/dpdk/spdk_pid419436 00:40:51.836 Removing: /var/run/dpdk/spdk_pid423015 00:40:51.836 Removing: /var/run/dpdk/spdk_pid429632 00:40:51.836 Removing: /var/run/dpdk/spdk_pid434223 00:40:51.836 Removing: /var/run/dpdk/spdk_pid434226 00:40:51.836 Removing: /var/run/dpdk/spdk_pid446829 00:40:51.836 Removing: /var/run/dpdk/spdk_pid447494 00:40:51.836 Removing: /var/run/dpdk/spdk_pid448164 00:40:51.836 Removing: /var/run/dpdk/spdk_pid448904 00:40:51.836 Removing: /var/run/dpdk/spdk_pid450430 00:40:51.836 Removing: /var/run/dpdk/spdk_pid450973 00:40:51.836 Removing: /var/run/dpdk/spdk_pid451633 00:40:51.836 Removing: /var/run/dpdk/spdk_pid452293 00:40:51.836 Removing: /var/run/dpdk/spdk_pid455073 00:40:51.836 Removing: /var/run/dpdk/spdk_pid455393 00:40:51.836 Removing: /var/run/dpdk/spdk_pid459393 00:40:51.836 Removing: /var/run/dpdk/spdk_pid459694 00:40:51.836 Removing: /var/run/dpdk/spdk_pid461429 00:40:51.836 Removing: /var/run/dpdk/spdk_pid466728 00:40:51.836 Removing: /var/run/dpdk/spdk_pid466823 00:40:51.836 Removing: /var/run/dpdk/spdk_pid469880 00:40:51.836 Removing: 
/var/run/dpdk/spdk_pid471398 00:40:51.836 Removing: /var/run/dpdk/spdk_pid472920 00:40:51.836 Removing: /var/run/dpdk/spdk_pid473870 00:40:51.836 Removing: /var/run/dpdk/spdk_pid475433 00:40:51.836 Removing: /var/run/dpdk/spdk_pid476425 00:40:51.836 Removing: /var/run/dpdk/spdk_pid482689 00:40:51.836 Removing: /var/run/dpdk/spdk_pid483078 00:40:51.836 Removing: /var/run/dpdk/spdk_pid483470 00:40:51.836 Removing: /var/run/dpdk/spdk_pid485358 00:40:51.836 Removing: /var/run/dpdk/spdk_pid485642 00:40:51.836 Removing: /var/run/dpdk/spdk_pid486037 00:40:51.836 Removing: /var/run/dpdk/spdk_pid488478 00:40:51.836 Removing: /var/run/dpdk/spdk_pid488625 00:40:51.836 Removing: /var/run/dpdk/spdk_pid490215 00:40:51.836 Removing: /var/run/dpdk/spdk_pid490974 00:40:51.836 Removing: /var/run/dpdk/spdk_pid491116 00:40:51.836 Clean 00:40:51.836 13:52:26 -- common/autotest_common.sh@1451 -- # return 0 00:40:51.836 13:52:26 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:40:51.836 13:52:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:51.836 13:52:26 -- common/autotest_common.sh@10 -- # set +x 00:40:51.836 13:52:26 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:40:51.836 13:52:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:51.836 13:52:26 -- common/autotest_common.sh@10 -- # set +x 00:40:51.836 13:52:26 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:51.836 13:52:26 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:51.836 13:52:26 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:51.836 13:52:26 -- spdk/autotest.sh@391 -- # hash lcov 00:40:51.836 13:52:26 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:40:51.836 13:52:26 -- spdk/autotest.sh@393 -- # hostname 00:40:51.836 13:52:26 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:52.093 geninfo: WARNING: invalid characters removed from testname! 
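Note: the coverage entries immediately above and below show the post-test lcov flow: a test-run capture is taken with geninfo, merged with the pre-test baseline, and then stripped of third-party and helper paths. The following is only a minimal sketch of that same flow, with REPO, OUT and the hostname test label as illustrative assumptions rather than the exact workspace layout or autotest.sh code.

# Sketch of the capture/merge/filter sequence seen in this log (assumed paths).
REPO=/path/to/spdk            # hypothetical checkout location
OUT=$REPO/../output           # hypothetical output directory

# Capture counters gathered during the test run into a tracefile.
lcov --no-external -q -c -d "$REPO" -t "$(hostname)" -o "$OUT/cov_test.info"

# Merge the pre-test baseline with the test capture.
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Drop third-party and helper code from the combined report
# (the same patterns that appear in the log entries below).
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done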
00:41:18.619 13:52:53 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:22.796 13:52:56 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:25.355 13:52:59 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:27.879 13:53:02 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:31.154 13:53:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:33.680 13:53:08 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:36.960 13:53:10 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:41:36.960 13:53:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:36.960 13:53:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:41:36.960 13:53:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:36.960 13:53:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:36.960 13:53:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.960 13:53:11 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.960 13:53:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.960 13:53:11 -- paths/export.sh@5 -- $ export PATH 00:41:36.960 13:53:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.960 13:53:11 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:41:36.960 13:53:11 -- common/autobuild_common.sh@444 -- $ date +%s 00:41:36.960 13:53:11 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720871591.XXXXXX 00:41:36.960 13:53:11 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720871591.ri2RCZ 00:41:36.960 13:53:11 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:41:36.960 13:53:11 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:41:36.960 13:53:11 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:41:36.960 13:53:11 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:41:36.960 13:53:11 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:41:36.960 13:53:11 -- common/autobuild_common.sh@460 -- $ get_config_params 00:41:36.960 13:53:11 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:41:36.960 13:53:11 -- common/autotest_common.sh@10 -- $ set +x 00:41:36.960 13:53:11 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:41:36.960 13:53:11 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:41:36.960 13:53:11 -- pm/common@17 -- $ local monitor 00:41:36.960 13:53:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:36.960 13:53:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:36.960 13:53:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:36.960 13:53:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:36.960 13:53:11 -- pm/common@21 -- $ date +%s 00:41:36.960 13:53:11 -- pm/common@21 -- $ date +%s 00:41:36.960 
13:53:11 -- pm/common@25 -- $ sleep 1 00:41:36.960 13:53:11 -- pm/common@21 -- $ date +%s 00:41:36.960 13:53:11 -- pm/common@21 -- $ date +%s 00:41:36.960 13:53:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720871591 00:41:36.960 13:53:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720871591 00:41:36.960 13:53:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720871591 00:41:36.960 13:53:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720871591 00:41:36.960 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720871591_collect-vmstat.pm.log 00:41:36.960 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720871591_collect-cpu-load.pm.log 00:41:36.960 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720871591_collect-cpu-temp.pm.log 00:41:36.960 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720871591_collect-bmc-pm.bmc.pm.log 00:41:37.527 13:53:12 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:41:37.527 13:53:12 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:41:37.527 13:53:12 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:37.527 13:53:12 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:41:37.527 13:53:12 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:41:37.527 13:53:12 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:41:37.527 13:53:12 -- spdk/autopackage.sh@19 -- $ timing_finish 00:41:37.527 13:53:12 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:41:37.527 13:53:12 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:41:37.527 13:53:12 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:37.527 13:53:12 -- spdk/autopackage.sh@20 -- $ exit 0 00:41:37.527 13:53:12 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:41:37.527 13:53:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:41:37.527 13:53:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:41:37.527 13:53:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:37.527 13:53:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:41:37.527 13:53:12 -- pm/common@44 -- $ pid=503608 00:41:37.527 13:53:12 -- pm/common@50 -- $ kill -TERM 503608 00:41:37.527 13:53:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:37.527 13:53:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:41:37.527 13:53:12 -- pm/common@44 
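Note: the entries above launch the power/resource collectors (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) in the background with a shared autopackage label and redirect each one's output under the power directory. The sketch below illustrates that launch pattern only; it is not SPDK's actual pm/common implementation, and POWER_DIR, LABEL and the pidfile handling are assumptions for illustration.

# Generic background-monitor launch pattern (assumed layout, not pm/common itself).
POWER_DIR=/path/to/output/power            # hypothetical output directory
LABEL="monitor.autopackage.sh.$(date +%s)" # shared label, as in the log above

start_monitor() {
    local collector=$1                     # e.g. collect-cpu-load
    # Run the collector in the background, logging under POWER_DIR (-l)
    # and tagging its output files with the shared label (-p).
    "$collector" -d "$POWER_DIR" -l -p "$LABEL" &
    # Record the background PID so the epilogue can signal it later.
    echo $! > "$POWER_DIR/$collector.pid"
}

start_monitor collect-cpu-load
start_monitor collect-vmstat
start_monitor collect-cpu-temp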
-- $ pid=503610 00:41:37.527 13:53:12 -- pm/common@50 -- $ kill -TERM 503610 00:41:37.527 13:53:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:37.527 13:53:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:41:37.527 13:53:12 -- pm/common@44 -- $ pid=503612 00:41:37.527 13:53:12 -- pm/common@50 -- $ kill -TERM 503612 00:41:37.527 13:53:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:41:37.527 13:53:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:41:37.527 13:53:12 -- pm/common@44 -- $ pid=503640 00:41:37.527 13:53:12 -- pm/common@50 -- $ sudo -E kill -TERM 503640 00:41:37.527 + [[ -n 52157 ]] 00:41:37.527 + sudo kill 52157 00:41:37.538 [Pipeline] } 00:41:37.558 [Pipeline] // stage 00:41:37.563 [Pipeline] } 00:41:37.577 [Pipeline] // timeout 00:41:37.582 [Pipeline] } 00:41:37.597 [Pipeline] // catchError 00:41:37.601 [Pipeline] } 00:41:37.616 [Pipeline] // wrap 00:41:37.622 [Pipeline] } 00:41:37.635 [Pipeline] // catchError 00:41:37.643 [Pipeline] stage 00:41:37.644 [Pipeline] { (Epilogue) 00:41:37.656 [Pipeline] catchError 00:41:37.658 [Pipeline] { 00:41:37.669 [Pipeline] echo 00:41:37.671 Cleanup processes 00:41:37.675 [Pipeline] sh 00:41:37.950 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:37.950 503751 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:41:37.950 503873 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:37.964 [Pipeline] sh 00:41:38.241 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:38.242 ++ grep -v 'sudo pgrep' 00:41:38.242 ++ awk '{print $1}' 00:41:38.242 + sudo kill -9 503751 00:41:38.273 [Pipeline] sh 00:41:38.556 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:48.519 [Pipeline] sh 00:41:48.795 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:48.795 Artifacts sizes are good 00:41:48.809 [Pipeline] archiveArtifacts 00:41:48.816 Archiving artifacts 00:41:49.027 [Pipeline] sh 00:41:49.306 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:41:49.316 [Pipeline] cleanWs 00:41:49.322 [WS-CLEANUP] Deleting project workspace... 00:41:49.323 [WS-CLEANUP] Deferred wipeout is used... 00:41:49.328 [WS-CLEANUP] done 00:41:49.329 [Pipeline] } 00:41:49.346 [Pipeline] // catchError 00:41:49.355 [Pipeline] sh 00:41:49.626 + logger -p user.info -t JENKINS-CI 00:41:49.634 [Pipeline] } 00:41:49.649 [Pipeline] // stage 00:41:49.654 [Pipeline] } 00:41:49.670 [Pipeline] // node 00:41:49.675 [Pipeline] End of Pipeline 00:41:49.705 Finished: SUCCESS
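Note: the stop_monitor_resources steps above walk the per-collector pid files under the power directory and send each monitor a TERM, with the BMC collector signalled via sudo because it runs privileged. The following is a hedged sketch of that pidfile-based teardown under assumed paths; it mirrors the behaviour shown in the log rather than reproducing the pm/common code.

# Sketch of pidfile-based monitor teardown (assumed POWER_DIR and monitor list).
POWER_DIR=/path/to/output/power

stop_monitors() {
    local monitor pidfile pid
    for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile="$POWER_DIR/$monitor.pid"
        [[ -e $pidfile ]] || continue
        pid=$(< "$pidfile")
        if [[ $monitor == collect-bmc-pm ]]; then
            sudo kill -TERM "$pid"   # BMC collector runs privileged, as in the log above
        else
            kill -TERM "$pid"
        fi
    done
}

# Registering the teardown on EXIT keeps the monitors from outliving the job
# even if packaging fails part-way, matching the trap set before autopackage.
trap stop_monitors EXIT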